Chapter 2. Requirements

This chapter outlines the main requirements for setting up an environment to provision Red Hat OpenStack Platform using the director. This includes the requirements for setting up the director, accessing it, and the hardware requirements for hosts that the director provisions for OpenStack services.

Note

Prior to deploying Red Hat OpenStack Platform, it is important to consider the characteristics of the available deployment methods. For more information, see the Installing and Managing Red Hat OpenStack Platform guide.

2.1. Environment Requirements

Minimum Requirements:

  • 1 host machine for the Red Hat OpenStack Platform director
  • 1 host machine for a Red Hat OpenStack Platform Compute node
  • 1 host machine for a Red Hat OpenStack Platform Controller node

Recommended Requirements:

  • 1 host machine for the Red Hat OpenStack Platform director
  • 3 host machines for Red Hat OpenStack Platform Compute nodes
  • 3 host machines for Red Hat OpenStack Platform Controller nodes in a cluster
  • 3 host machines for Red Hat Ceph Storage nodes in a cluster

Note the following:

  • It is recommended to use bare metal systems for all nodes. At minimum, the Compute nodes and Ceph Storage nodes require bare metal systems.
  • All overcloud bare metal systems require an Intelligent Platform Management Interface (IPMI). This is because the director controls the power management.
  • Set the internal BIOS clock of each node to UTC. This prevents issues with future-dated file timestamps when hwclock synchronizes the BIOS clock before applying the timezone offset.
  • Red Hat OpenStack Platform has special character encoding requirements as part of the locale settings:

    • Use UTF-8 encoding on all nodes. Ensure the LANG environment variable is set to en_US.UTF-8 on all nodes. A configuration example for the clock and locale settings follows this list.
    • Avoid using non-ASCII characters if you use Red Hat Ansible Tower to automate the creation of Red Hat OpenStack Platform resources.
  • To deploy overcloud Compute nodes on POWER (ppc64le) hardware, read the overview in Appendix G, Red Hat OpenStack Platform for POWER.
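
The following is a minimal sketch of how you might apply the clock and locale settings on a Red Hat Enterprise Linux 7 node. The commands are standard systemd and util-linux tools, but verify them against your own build process before automating them:

$ timedatectl set-local-rtc 0              # keep the hardware clock in UTC rather than local time
$ hwclock --systohc --utc                  # write the current system time to the BIOS clock in UTC
$ localectl set-locale LANG=en_US.UTF-8    # set the system locale to UTF-8
$ timedatectl | grep 'RTC in local TZ'     # verify; the output should report "RTC in local TZ: no"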

2.2. Undercloud Requirements

The undercloud system hosting the director provides provisioning and management for all nodes in the overcloud. The undercloud host must meet the following requirements; a quick verification example follows the list.

  • An 8-core 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
  • A minimum of 16 GB of RAM.

    • The ceph-ansible playbook consumes 1 GB resident set size (RSS) per 10 hosts deployed by the undercloud. If the deployed overcloud will use an existing Ceph cluster, or if it will deploy a new Ceph cluster, then provision undercloud RAM accordingly.
  • A minimum of 100 GB of available disk space on the root disk. This includes:

    • 10 GB for container images
    • 10 GB to accommodate QCOW2 image conversion and caching during the node provisioning process
    • 80 GB+ for general usage, logging, metrics, and growth
  • A minimum of 2 x 1 Gbps Network Interface Cards. However, it is recommended to use a 10 Gbps interface for Provisioning network traffic, especially if provisioning a large number of nodes in your overcloud environment.
  • The latest version of Red Hat Enterprise Linux 7 is installed as the host operating system.
  • SELinux is enabled in Enforcing mode on the host.
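
The following is a minimal sketch of how you might verify that a host meets these requirements; the exact output varies between Red Hat Enterprise Linux 7 minor releases:

$ nproc                      # processor count; expect 8 or more cores
$ free -h                    # total memory; expect at least 16G
$ df -h /                    # available space on the root disk; expect at least 100G
$ cat /etc/redhat-release    # confirm the latest Red Hat Enterprise Linux 7 release
$ getenforce                 # confirm that SELinux reports Enforcing
$ ip link show               # confirm that at least two network interfaces are present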

2.2.1. Virtualization Support

Red Hat only supports a virtualized undercloud on the following platforms:

Platform | Notes
Kernel-based Virtual Machine (KVM) | Hosted by Red Hat Enterprise Linux 7, as listed on certified hypervisors.
Red Hat Virtualization | Hosted by Red Hat Virtualization 4.x, as listed on certified hypervisors.
Microsoft Hyper-V | Hosted by versions of Hyper-V as listed on the Red Hat Customer Portal Certification Catalogue.
VMware ESX and ESXi | Hosted by versions of ESX and ESXi as listed on the Red Hat Customer Portal Certification Catalogue.

Important

Red Hat OpenStack Platform director requires that the latest version of Red Hat Enterprise Linux 7 is installed as the host operating system. This means your virtualization platform must also support the underlying Red Hat Enterprise Linux version.

Virtual Machine Requirements

Resource requirements for a virtual undercloud are similar to those of a bare metal undercloud. Consider the various tuning options when provisioning the virtual machine, such as network model, guest CPU capabilities, storage backend, storage format, and caching mode.

Network Considerations

Note the following network considerations for your virtualized undercloud:

Power Management
The undercloud VM requires access to the overcloud nodes' power management devices. This is the IP address set for the pm_addr parameter when registering nodes.
Provisioning network
The NIC used for the provisioning (ctlplane) network requires the ability to broadcast and serve DHCP requests to the NICs of the overcloud’s bare metal nodes. As a recommendation, create a bridge that connects the VM’s NIC to the same network as the bare metal NICs.
Note

A common problem occurs when the hypervisor technology blocks the undercloud from transmitting traffic from an unknown address.

  • If using Red Hat Enterprise Virtualization, disable anti-mac-spoofing to prevent this.
  • If using VMware ESX or ESXi, allow forged transmits to prevent this.

You must power off and on the director VM after you apply these settings. Rebooting the VM is not sufficient.

Example Architecture

This is just an example of a basic undercloud virtualization architecture using a KVM server. It is intended as a foundation you can build on depending on your network and resource requirements.

The KVM host uses two Linux bridges:

br-ex (eth0)
  • Provides outside access to the undercloud
  • DHCP server on outside network assigns network configuration to undercloud using the virtual NIC (eth0)
  • Provides access for the undercloud to the power management interfaces of the bare metal servers
br-ctlplane (eth1)
  • Connects to the same network as the bare metal overcloud nodes
  • Undercloud fulfills DHCP and PXE boot requests through virtual NIC (eth1)
  • Bare metal servers for the overcloud boot through PXE over this network

For more information on how to create and configure these bridges, see "Configure Network Bridging" in the Red Hat Enterprise Linux 7 Networking Guide.
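
As an illustration, you could create the bridges with nmcli. This is a minimal sketch that assumes the KVM host uses NetworkManager and the interface names eth0 and eth1 shown above; adapt the names and addressing to your environment:

$ nmcli connection add type bridge con-name br-ex ifname br-ex
$ nmcli connection add type bridge-slave con-name br-ex-port ifname eth0 master br-ex
$ nmcli connection add type bridge con-name br-ctlplane ifname br-ctlplane ipv4.method disabled ipv6.method ignore
$ nmcli connection add type bridge-slave con-name br-ctlplane-port ifname eth1 master br-ctlplane
$ nmcli connection up br-ex
$ nmcli connection up br-ctlplane

In this sketch, br-ex keeps the default automatic (DHCP) addressing for host connectivity, and br-ctlplane is left without an IP address on the host because the undercloud VM serves DHCP on that network.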

The KVM host requires the following packages:

$ yum install libvirt-client libvirt-daemon qemu-kvm libvirt-daemon-driver-qemu libvirt-daemon-kvm virt-install bridge-utils rsync virt-viewer

The following command creates the undercloud virtual machine on the KVM host and creates two virtual NICs that connect to the respective bridges:

$ virt-install --name undercloud --memory=16384 --vcpus=4 --location /var/lib/libvirt/images/rhel-server-7.5-x86_64-dvd.iso --disk size=100 --network bridge=br-ex --network bridge=br-ctlplane --graphics=vnc --hvm --os-variant=rhel7

This starts a libvirt domain. Connect to it with virt-manager and walk through the install process. Alternatively, you can perform an unattended installation using the following options to include a kickstart file:

--initrd-inject=/root/ks.cfg --extra-args "ks=file:/ks.cfg"
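
Combined with the virt-install command above, the unattended variant might look like the following. This is a sketch only, and it assumes a kickstart file already exists at /root/ks.cfg on the KVM host:

$ virt-install --name undercloud --memory=16384 --vcpus=4 --location /var/lib/libvirt/images/rhel-server-7.5-x86_64-dvd.iso --disk size=100 --network bridge=br-ex --network bridge=br-ctlplane --graphics=vnc --hvm --os-variant=rhel7 --initrd-inject=/root/ks.cfg --extra-args "ks=file:/ks.cfg"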

After the installation completes, SSH into the instance as the root user and follow the instructions in Chapter 4, Installing the undercloud.

Backups

To back up a virtualized undercloud, there are multiple solutions:

  • Option 1: Follow the instructions in the Back Up and Restore the Director Undercloud Guide.
  • Option 2: Shut down the undercloud and take a copy of the undercloud virtual machine’s backing storage.
  • Option 3: Take a snapshot of the undercloud VM if your hypervisor supports live or atomic snapshots.

If using a KVM server, use the following procedure to take a snapshot (a verification example follows the procedure):

  1. Make sure qemu-guest-agent is running on the undercloud guest VM.
  2. Create a live snapshot of the running VM:
$ virsh snapshot-create-as --domain undercloud --disk-only --atomic --quiesce
  3. Take a copy of the (now read-only) QCOW backing file:
$ rsync --sparse -avh --progress /var/lib/libvirt/images/undercloud.qcow2 1.qcow2
  4. Merge the QCOW overlay file into the backing file and switch the undercloud VM back to using the original file:
$ virsh blockcommit undercloud vda --active --verbose --pivot
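
As a minimal sanity check after the pivot, confirm that the domain is using the original image again and review any leftover snapshot metadata; these commands assume the domain is named undercloud as in the example above:

$ virsh domblklist undercloud      # the vda source should point at the original undercloud.qcow2 file
$ virsh snapshot-list undercloud   # list snapshot metadata created by the disk-only snapshot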

2.3. Networking Requirements

The undercloud host requires at least two networks:

  • Provisioning network - Provides DHCP and PXE boot functions to help discover bare metal systems for use in the overcloud. Typically, this network must use a native VLAN on a trunked interface so that the director serves PXE boot and DHCP requests. Some server hardware BIOSes support PXE boot from a VLAN, but the BIOS must also support translating that VLAN into a native VLAN after booting, otherwise the undercloud will not be reachable. Currently, only a small subset of server hardware fully supports this feature. This is also the network you use to control power management through Intelligent Platform Management Interface (IPMI) on all overcloud nodes.
  • External Network - A separate network for external access to the overcloud and undercloud. The interface connecting to this network requires a routable IP address, either defined statically, or dynamically through an external DHCP service.

This represents the minimum number of networks required. However, the director can isolate other Red Hat OpenStack Platform network traffic into other networks. Red Hat OpenStack Platform supports both physical interfaces and tagged VLANs for network isolation.

Note the following:

  • Typical minimal overcloud network configurations include:

    • Single NIC configuration - One NIC for the Provisioning network on the native VLAN and tagged VLANs that use subnets for the different overcloud network types.
    • Dual NIC configuration - One NIC for the Provisioning network and the other NIC for the External network.
    • Dual NIC configuration - One NIC for the Provisioning network on the native VLAN and the other NIC for tagged VLANs that use subnets for the different overcloud network types.
    • Multiple NIC configuration - Each NIC uses a subnet for a different overcloud network type.
  • Additional physical NICs can be used for isolating individual networks, creating bonded interfaces, or for delegating tagged VLAN traffic.
  • If using VLANs to isolate your network traffic types, use a switch that supports 802.1Q standards to provide tagged VLANs.
  • During the overcloud creation, you will refer to NICs using a single name across all overcloud machines. Ideally, you should use the same NIC on each overcloud node for each respective network to avoid confusion. For example, use the primary NIC for the Provisioning network and the secondary NIC for the OpenStack services.
  • Make sure the Provisioning network NIC is not the same NIC used for remote connectivity on the director machine. The director installation creates a bridge using the Provisioning NIC, which drops any remote connections. Use the External NIC for remote connections to the director system.
  • The Provisioning network requires an IP range that fits your environment size. Use the following guidelines to determine the total number of IP addresses to include in this range (a worked example follows this list):

    • Include at least one IP address per node connected to the Provisioning network.
    • If planning a high availability configuration, include an extra IP address for the virtual IP of the cluster.
    • Include additional IP addresses within the range for scaling the environment.

      Note

      Duplicate IP addresses should be avoided on the Provisioning network. For more information, see Section 3.2, “Planning Networks”.

      Note

      For more information on planning your IP address usage, for example, for storage, provider, and tenant networks, see the Networking Guide.

  • Set all overcloud systems to PXE boot off the Provisioning NIC, and disable PXE boot on the External NIC (and any other NICs on the system). Also ensure that the Provisioning NIC has PXE boot at the top of the boot order, ahead of hard disks and CD/DVD drives.
  • All overcloud bare metal systems require a supported power management interface, such as an Intelligent Platform Management Interface (IPMI). This allows the director to control the power management of each node.
  • Make a note of the following details for each overcloud system: the MAC address of the Provisioning NIC, the IP address of the IPMI NIC, IPMI username, and IPMI password. This information will be useful later when setting up the overcloud nodes.
  • If an instance needs to be accessible from the external internet, you can allocate a floating IP address from a public network and associate it with an instance. The instance still retains its private IP, but network traffic uses NAT to traverse through to the floating IP address. Note that a floating IP address can be associated with only a single instance at a time, not with multiple private IP addresses. However, the floating IP address is reserved for use by a single tenant, which means the tenant can associate or disassociate it with a particular instance as required. This configuration exposes your infrastructure to the external internet. As a result, you might need to check that you are following suitable security practices.
  • To mitigate the risk of network loops in Open vSwitch, only a single interface or a single bond may be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges.
  • It is recommended to use DNS hostname resolution so that your overcloud nodes can connect to external services, such as the Red Hat Content Delivery Network and network time servers.
  • To prevent a Controller node network card or network switch failure disrupting overcloud services availability, ensure that the keystone admin endpoint is located on a network that uses bonded network cards or networking hardware redundancy. If you move the keystone endpoint to a different network, such as internal_api, ensure that the undercloud can reach the VLAN or subnet. For more information, see the Red Hat Knowledgebase solution How to migrate Keystone Admin Endpoint to internal_api network.
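
As a sizing illustration only (the node counts here are assumptions, not recommendations): an overcloud with three Controller nodes, three Compute nodes, and three Ceph Storage nodes connected to the Provisioning network needs 9 node addresses, plus 1 address for the high availability virtual IP, plus headroom for scaling. Reserving 5 spare addresses gives a minimum range of 9 + 1 + 5 = 15 IP addresses.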
Important

Your OpenStack Platform implementation is only as secure as its environment. Follow good security principles in your networking environment to ensure that network access is properly controlled. For example:

  • Use network segmentation to mitigate lateral movement across the network and to isolate sensitive data; a flat network is much less secure.
  • Restrict service access and ports to a minimum.
  • Ensure proper firewall rules and password usage.
  • Ensure that SELinux is enabled.

For details on securing your system, see the Red Hat OpenStack Platform Security and Hardening Guide.

2.4. Overcloud Requirements

The following sections detail the requirements for individual systems and nodes in the overcloud installation.

2.4.1. Compute Node Requirements

Compute nodes are responsible for running virtual machine instances after they are launched. Compute nodes must support hardware virtualization. Compute nodes must also have enough memory and disk space to support the requirements of the virtual machine instances they host.

Processor
  • 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled. It is recommended this processor has a minimum of 4 cores.
  • IBM POWER 8 processor.
Memory
A minimum of 6 GB of RAM. Add additional RAM to this requirement based on the amount of memory that you intend to make available to virtual machine instances.
Disk Space
A minimum of 50 GB of available disk space.
Network Interface Cards
A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
Power Management
Each Compute node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server’s motherboard.

2.4.2. Controller Node Requirements

Controller nodes are responsible for hosting the core services in a Red Hat OpenStack Platform environment, such as the Horizon dashboard, the back-end database server, Keystone authentication, and High Availability services.

Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory

The minimum amount of memory is 32 GB. However, the recommended amount of memory depends on the number of vCPUs, which is based on the number of CPU cores multiplied by the hyper-threading value. Use the following calculations as guidance:

  • Controller RAM minimum calculation:

    • Use 1.5 GB of memory per vCPU. For example, a machine with 48 vCPUs should have 72 GB of RAM.
  • Controller RAM recommended calculation:

    • Use 3 GB of memory per vCPU. For example, a machine with 48 vCPUs should have 144 GB of RAM.

For more information on measuring memory requirements, see "Red Hat OpenStack Platform Hardware Requirements for Highly Available Controllers" on the Red Hat Customer Portal.

Disk Storage and Layout

A minimum of 50 GB of storage is required if the Object Storage service (swift) is not running on the Controller nodes. However, the Telemetry (gnocchi) and Object Storage services are both installed on the Controller nodes, with both configured to use the root disk. These defaults are suitable for deploying small overclouds built on commodity hardware; such environments are typical of proof-of-concept and test environments. These defaults also allow the deployment of overclouds with minimal planning, but they offer little in terms of workload capacity and performance.

In an enterprise environment, however, this could cause a significant bottleneck, as Telemetry accesses storage constantly. This results in heavy disk I/O usage, which severely impacts the performance of all other Controller services. In this type of environment, you need to plan your overcloud and configure it accordingly.

Red Hat provides several configuration recommendations for both Telemetry and Object Storage. See Deployment Recommendations for Specific Red Hat OpenStack Platform Services for details.

Network Interface Cards
A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
Power Management
Each Controller node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server’s motherboard.

2.4.2.1. Virtualization Support

Red Hat only supports virtualized controller nodes on Red Hat Virtualization platforms. See Virtualized control planes for details.

2.4.3. Ceph Storage Node Requirements

Ceph Storage nodes are responsible for providing object storage in a Red Hat OpenStack Platform environment.

Placement Groups
Ceph uses Placement Groups to facilitate dynamic and efficient object tracking at scale. In the case of OSD failure or cluster re-balancing, Ceph can move or replicate a placement group and its contents, which means a Ceph cluster can re-balance and recover efficiently. The default Placement Group count that the director creates is not always optimal, so it is important to calculate the correct Placement Group count according to your requirements. You can use the Placement Group calculator to calculate the correct count: Ceph Placement Groups (PGs) per Pool Calculator. A rule-of-thumb calculation example appears at the end of this section.
Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory
Red Hat typically recommends a baseline of 16 GB of RAM per OSD host, with an additional 2 GB of RAM per OSD daemon.
Disk Layout

Sizing depends on your storage needs. The recommended Red Hat Ceph Storage node configuration requires three or more disks in a layout similar to the following:

  • /dev/sda - The root disk. The director copies the main Overcloud image to the disk. This should be at minimum 50 GB of available disk space.
  • /dev/sdb - The journal disk. This disk divides into partitions for Ceph OSD journals. For example, /dev/sdb1, /dev/sdb2, /dev/sdb3, and onward. The journal disk is usually a solid state drive (SSD) to aid with system performance.
  • /dev/sdc and onward - The OSD disks. Use as many disks as necessary for your storage requirements.

    Note

    Red Hat OpenStack Platform director uses ceph-ansible, which does not support installing the OSD on the root disk of Ceph Storage nodes. This means you need at least two or more disks for a supported Ceph Storage node.

Network Interface Cards
A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. It is recommended to use a 10 Gbps interface for storage nodes, especially if creating an OpenStack Platform environment that serves a high volume of traffic.
Power Management
Each Ceph Storage node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the motherboard of the server.
Image Properties
To help improve Red Hat Ceph Storage block device performance, you can configure the Glance image to use the virtio-scsi driver. For more information about recommended image properties, see Configuring Glance in the Red Hat Ceph Storage documentation.
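
As a rule-of-thumb illustration only, and not a substitute for the Placement Group calculator referenced above: a commonly used starting point is (number of OSDs × 100) / replica count, rounded up to the nearest power of two. For example, 9 OSDs with a replica count of 3 give (9 × 100) / 3 = 300, which rounds up to 512 Placement Groups for the pool.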

See the Deploying an Overcloud with Containerized Red Hat Ceph guide for more information about installing an overcloud with a Ceph Storage cluster.

2.4.4. Object Storage Node Requirements

Object Storage nodes provide an object storage layer for the overcloud. The Object Storage proxy is installed on Controller nodes. The storage layer requires bare metal nodes with multiple disks per node.

Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory
Memory requirements depend on the amount of storage space. Ideally, use at minimum 1 GB of memory per 1 TB of hard disk space. For optimal performance, it is recommended to use 2 GB per 1 TB of hard disk space, especially for small-file (less than 100 GB) workloads.
Disk Space

Storage requirements depend on the capacity needed for the workload. It is recommended to use SSD drives to store the account and container data. The capacity ratio of account and container data to objects is about 1 percent. For example, for every 100 TB of hard drive capacity, provide 1 TB of SSD capacity for account and container data.

However, this depends on the type of stored data. If storing mostly small objects, provide more SSD space. For large objects (videos, backups), use less SSD space.

Disk Layout

The recommended node configuration requires a disk layout similar to the following:

  • /dev/sda - The root disk. The director copies the main overcloud image to the disk.
  • /dev/sdb - Used for account data.
  • /dev/sdc - Used for container data.
  • /dev/sdd and onward - The object server disks. Use as many disks as necessary for your storage requirements.
Network Interface Cards
A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
Power Management
Each Object Storage node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server’s motherboard.

2.5. Repository Requirements

Both the undercloud and overcloud require access to Red Hat repositories either through the Red Hat Content Delivery Network (CDN), or through Red Hat Satellite Server 5 or Red Hat Satellite Server 6. If you use Red Hat Satellite Server, you must synchronize the required repositories to your Red Hat OpenStack Platform environment. Use the following list of CDN channel names as a guide (a repository-enablement example follows the table):

Table 2.1. OpenStack Platform Repositories

Name | Repository | Description of Requirement
Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rpms | Base operating system repository for x86_64 systems.
Red Hat Enterprise Linux 7 Server - Extras (RPMs) | rhel-7-server-extras-rpms | Contains Red Hat OpenStack Platform dependencies.
Red Hat Enterprise Linux 7 Server - RH Common (RPMs) | rhel-7-server-rh-common-rpms | Contains tools for deploying and configuring Red Hat OpenStack Platform.
Red Hat Satellite Tools 6.3 (for RHEL 7 Server) (RPMs) x86_64 | rhel-7-server-satellite-tools-6.3-rpms | Tools for managing hosts with Red Hat Satellite Server 6. Note that using later versions of the Satellite Tools repository might cause the undercloud installation to fail.
Red Hat Enterprise Linux High Availability (for RHEL 7 Server) (RPMs) | rhel-ha-for-rhel-7-server-rpms | High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability.
Red Hat OpenStack Platform 13 for RHEL 7 (RPMs) | rhel-7-server-openstack-13-rpms | Core Red Hat OpenStack Platform repository. Also contains packages for Red Hat OpenStack Platform director.
Red Hat Ceph Storage OSD 3 for Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rhceph-3-osd-rpms | (For Ceph Storage Nodes) Repository for the Ceph Storage Object Storage daemon. Installed on Ceph Storage nodes.
Red Hat Ceph Storage MON 3 for Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rhceph-3-mon-rpms | (For Ceph Storage Nodes) Repository for the Ceph Storage Monitor daemon. Installed on Controller nodes in OpenStack environments using Ceph Storage nodes.
Red Hat Ceph Storage Tools 3 for Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rhceph-3-tools-rpms | Provides tools for nodes to communicate with the Ceph Storage cluster. Enable this repository for all nodes when you deploy an overcloud with a Ceph Storage cluster or when you integrate your overcloud with an existing Ceph Storage cluster.
Red Hat OpenStack 13 Director Deployment Tools for RHEL 7 (RPMs) | rhel-7-server-openstack-13-deployment-tools-rpms | (For Ceph Storage Nodes) Provides a set of deployment tools that are compatible with the current version of Red Hat OpenStack Platform director. Installed on Ceph nodes without an active Red Hat OpenStack Platform subscription.
Enterprise Linux for Real Time for NFV (RHEL 7 Server) (RPMs) | rhel-7-server-nfv-rpms | Repository for Real Time KVM (RT-KVM) for NFV. Contains packages to enable the real time kernel. Enable this repository for all Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository.
Red Hat OpenStack Platform 13 Extended Life Cycle Support for RHEL 7 (RPMs) | rhel-7-server-openstack-13-els-rpms | Contains updates for Extended Life Cycle Support, which began on June 26, 2021. You need the "Entitlement for OpenStack 13 Platform Extended Life Cycle Support" (MCT3637) for this repository to be available.
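
As an illustration of how these repository IDs are typically enabled on a system registered with subscription-manager, the following is a minimal sketch for an undercloud host; enable only the repositories that apply to each node's role, as described in the table above:

$ subscription-manager repos --disable='*'
$ subscription-manager repos --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-extras-rpms \
    --enable=rhel-7-server-rh-common-rpms \
    --enable=rhel-ha-for-rhel-7-server-rpms \
    --enable=rhel-7-server-openstack-13-rpms
$ yum repolist enabled    # confirm that the expected repositories are active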

OpenStack Platform Repositories for IBM POWER

These repositories are used for the feature described in Appendix G, Red Hat OpenStack Platform for POWER.

Name | Repository | Description of Requirement
Red Hat Enterprise Linux for IBM Power, little endian | rhel-7-for-power-le-rpms | Base operating system repository for ppc64le systems.
Red Hat OpenStack Platform 13 for RHEL 7 (RPMs) | rhel-7-server-openstack-13-for-power-le-rpms | Core Red Hat OpenStack Platform repository for ppc64le systems.

Note

To configure repositories for your Red Hat OpenStack Platform environment in an offline network, see "Configuring Red Hat OpenStack Platform Director in an Offline Environment" on the Red Hat Customer Portal.