Chapter 2. Top New Features

This section provides an overview of the top new features in this release of Red Hat OpenStack Platform.

2.1. Compute

This section outlines the top new features for the Compute service.

Tenant-isolated host aggregates using the Placement service
You can use the Placement service to provide tenant isolation by creating host aggregates that only specific tenants can launch instances on. For more information, see Creating a project-isolated host aggregate.
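As a rough sketch, this workflow combines a Compute scheduler prefilter with aggregate metadata; the aggregate name, host name, and project ID below are placeholders:

```shell
# Sketch only: names and IDs are placeholders.
# Prerequisite (Compute scheduler configuration, typically set through director):
#   [scheduler]
#   limit_tenants_to_placement_aggregate = True

# Create the aggregate and add a host to it
openstack --os-compute-api-version 2.53 aggregate create project-isolated
openstack --os-compute-api-version 2.53 aggregate add host project-isolated compute-0

# Restrict the aggregate so that only one project can launch instances on it
openstack aggregate set --property filter_tenant_id=<project_id> project-isolated
```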
File-backed memory
You can configure instances to use a local storage device as the memory backing device.
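As an illustration, the upstream Compute option behind this feature is `file_backed_memory` in the `[libvirt]` section; in a director deployment you would normally set it through heat templates rather than editing `nova.conf` directly:

```ini
# nova.conf sketch on a Compute node (value is illustrative)
[libvirt]
# Host disk space, in MiB, to use as the memory backing for instances
file_backed_memory = 1048576

[DEFAULT]
# File-backed memory requires a RAM allocation ratio of 1.0
ram_allocation_ratio = 1.0
```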

2.2. Distributed Compute Nodes (DCN)

This section outlines the top new features for Distributed Compute Nodes (DCN).

Multi-stack for Distributed Compute Node (DCN)
In Red Hat OpenStack Platform 16.1, you can partition a single overcloud deployment into multiple heat stacks in the undercloud to separate deployment and management operations within a DCN deployment. You can deploy and manage each site in a DCN deployment independently with a distinct heat stack.
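For illustration, each site maps to its own heat stack and its own deploy command; the stack names and environment files below are placeholders:

```shell
# Deploy the central site as its own stack
openstack overcloud deploy \
  --stack central \
  --templates \
  -e central-site.yaml

# Deploy an edge site independently, as a separate stack
openstack overcloud deploy \
  --stack dcn0 \
  --templates \
  -e dcn0-site.yaml
```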

2.3. Edge Computing

This section outlines the top new features for edge computing.

Edge features added in Red Hat OpenStack Platform 16.1.2
Edge support is now available for Ansible-based transport layer security everywhere (TLSe), the Key Manager service (barbican), and routed provider networks. You can now use an Ansible playbook to pre-cache glance images for edge sites.

2.4. Networking

This section outlines the top new features for the Networking service.

HA support for the Load-balancing service (octavia)
In Red Hat OpenStack Platform 16.1, you can make Load-balancing service (octavia) instances highly available when you implement an active-standby topology and use the amphora provider driver. For more information, see Enabling active-standby topology for Load-balancing service instances in the Using Octavia for Load Balancing-as-a-Service guide.
Load-balancing service (octavia) support for UDP traffic
You can use the Red Hat OpenStack Platform Load-balancing service (octavia) to balance network traffic on UDP ports. For more information, see Creating a UDP load balancer with a health monitor in the Using Octavia for Load Balancing-as-a-Service guide.
Routed provider networks
Starting in Red Hat OpenStack Platform 16.1.1, you can deploy routed provider networks using the ML2/OVS or the SR-IOV mechanism drivers. Routed provider networks are common in edge distributed compute node (DCN) and spine-leaf routed data center deployments. They enable a single provider network to represent multiple layer 2 networks (broadcast domains) or network segments, permitting the operator to present only one network to users. For more information, see Deploying routed provider networks in the Networking Guide.
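As a sketch, a routed provider network is a single network with multiple segments, each tied to its own subnet; the names, VLAN IDs, and address ranges below are illustrative:

```shell
# Create the network with its first segment (names and IDs are placeholders)
openstack network create --share \
  --provider-physical-network provider1 \
  --provider-network-type vlan \
  --provider-segment 128 \
  multisegment1

# Add a second segment on a different physical network
openstack network segment create \
  --physical-network provider2 \
  --network-type vlan \
  --segment 129 \
  --network multisegment1 \
  segment2

# Associate a subnet with the new segment
openstack subnet create \
  --network multisegment1 \
  --network-segment segment2 \
  --subnet-range 203.0.113.0/24 \
  subnet-segment2
```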
SR-IOV with native OVN DHCP in ML2/OVN deployments

Starting in Red Hat OpenStack Platform 16.1.1, you can use SR-IOV with native OVN DHCP (no need for neutron DHCP) in ML2/OVN deployments.

For more information, see Enabling SR-IOV with ML2/OVN and Native OVN DHCP and Limits of the ML2/OVN mechanism driver in the Networking Guide.
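One way to picture the deployment change is a heat environment that maps the neutron DHCP agent service to `OS::Heat::None`, because native OVN DHCP serves the SR-IOV ports. Whether your templates need this mapping explicitly depends on the environment files you already include, so treat this fragment as an assumption and follow the referenced guide:

```yaml
resource_registry:
  # Native OVN DHCP answers DHCP requests, so no neutron DHCP agent is needed
  OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None
```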

Northbound path MTU discovery support for jumbo frames

Red Hat OpenStack Platform 16.1.2 introduces MTU discovery to support UDP jumbo frames. After receiving a jumbo UDP frame that exceeds the MTU of the external network, ML2/OVN routers return ICMP "fragmentation needed" packets back to the sending VM. The sending application can then break the payload into smaller packets. Previously, the inability to return ICMP "fragmentation needed" packets resulted in packet loss. For more information about the necessary configuration steps, see Configuring ML2/OVN northbound path MTU discovery for jumbo frame fragmentation in the Advanced Overcloud Customization guide.
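Under the hood, the discovery behavior corresponds to the upstream `ovn_emit_need_to_frag` ML2 option. The fragment below is a sketch; in a director deployment, follow the documented heat-based procedure rather than editing the file directly:

```ini
# ml2_conf.ini sketch on a Controller node
[ovn]
# Emit ICMP "fragmentation needed" when a packet exceeds the external MTU
ovn_emit_need_to_frag = True
```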

Note that for east/west traffic, OVN does not support fragmentation of packets that are larger than the smallest MTU on the east/west path.

Example

  • VM1 is on Network1 with an MTU of 1300.
  • VM2 is on Network2 with an MTU of 1200.
  • A ping in either direction between VM1 and VM2 with a size of 1171 or less succeeds. A ping with a size greater than 1171 results in 100 percent packet loss.

    See https://bugzilla.redhat.com/show_bug.cgi?id=1891591.

Load-balancing service instance (amphora) log offloading
By default, Load-balancing service instances (amphorae) store logs on the local machine in the systemd journal. However, starting in Red Hat OpenStack Platform 16.1.2, you can specify that amphorae offload logs to syslog receivers to aggregate both administrative and tenant traffic flow logs. Log offloading enables administrators to go to one location for logs, and retain logs when amphorae are rotated. For more information, see Basics of offloading Load-balancing service instance (amphora) logs in the Using Octavia for Load Balancing-as-a-Service guide.
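As a sketch, log offloading is enabled through director parameters; the parameter names follow the log offloading documentation, and the receiver addresses are placeholders:

```yaml
parameter_defaults:
  # Enable amphora log offloading
  OctaviaLogOffload: true
  # Syslog receivers for administrative and tenant flow logs (placeholders)
  OctaviaAdminLogTargets: ['192.0.2.10:514']
  OctaviaTenantLogTargets: ['192.0.2.11:514']
```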
OVN provider driver for the Load-balancing service (octavia)

Red Hat OpenStack Platform (RHOSP) 16.1.2 introduces full support for the Open Virtual Network (OVN) load-balancing provider, a lightweight load balancer with a basic feature set. Typically used for east-west, layer 4 network traffic, OVN provisions quickly and consumes fewer resources than a full-featured load-balancing provider such as amphora.

Note

Health check functionality is not implemented for the OVN provider driver.

On RHOSP deployments that use the ML2/OVN neutron plug-in, RHOSP director automatically enables the OVN provider driver in the Load-balancing service (octavia), without requiring additional installation or configuration steps. As with all RHOSP deployments, the default load-balancing provider driver, amphora, remains enabled and fully supported. For more information, see Creating an OVN load balancer in the Using Octavia for Load Balancing-as-a-Service guide.
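For example, selecting the OVN provider is a matter of passing `--provider ovn` when you create the load balancer; the resource names and addresses below are placeholders:

```shell
# Create a basic TCP load balancer with the OVN provider (names are placeholders)
openstack loadbalancer create --name lb1 \
  --provider ovn \
  --vip-subnet-id private-subnet

openstack loadbalancer listener create --name listener1 \
  --protocol TCP --protocol-port 80 lb1

# The OVN provider supports only the SOURCE_IP_PORT algorithm
openstack loadbalancer pool create --name pool1 \
  --listener listener1 --protocol TCP --lb-algorithm SOURCE_IP_PORT

openstack loadbalancer member create \
  --address 192.0.2.50 --protocol-port 80 pool1
```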

In-place migration from ML2/OVS to ML2/OVN

RHOSP 16.1.1 reintroduces in-place migration of non-NFV deployments from the ML2/OVS mechanism driver to the ML2/OVN mechanism driver. If your existing Red Hat OpenStack Platform (RHOSP) deployment uses the ML2/OVS mechanism driver, begin evaluating the benefits and feasibility of replacing it with the ML2/OVN mechanism driver as described in Migrating from ML2/OVS to ML2/OVN.

Note

Red Hat requires that you file a preemptive support case before attempting a migration from ML2/OVS to ML2/OVN. Red Hat does not support migrations without the preemptive support case.

2.5. Storage

This section outlines the top new features for the Storage service.

Storage at the Edge with Distributed Compute Nodes (DCN)

In Red Hat OpenStack Platform 16.1, you can deploy storage at the edge with Distributed Compute Nodes. The following features have been added to support this architecture:

  • Image Service (glance) multi-stores with RBD.
  • Image Service multi-store image import tooling.
  • Block Storage Service (cinder) A/A at the edge.
  • Support for director deployments with multiple Ceph clusters.
Support for Manila CephFS Native
In Red Hat OpenStack Platform 16.1, the Shared Filesystems service (manila) fully supports the Native CephFS driver.
FileStore to BlueStore OSD migration
Starting in Red Hat OpenStack Platform 16.1.2, an Ansible-driven workflow migrates Ceph OSDs from FileStore to BlueStore. This means that customers who use director-deployed Ceph Storage can complete the Framework for Upgrades (OSP13 to OSP16.1) process.
In-use RBD volume migration
Starting in Red Hat OpenStack Platform 16.1.2, you can migrate or retype RBD in-use cinder volumes from one Ceph pool to another within the same Ceph cluster. See https://bugzilla.redhat.com/show_bug.cgi?id=1293440.

2.6. Bare Metal Service

This section outlines the top new features for the Bare Metal (ironic) service.

Policy-based routing
With this enhancement, you can use policy-based routing for OpenStack nodes to configure multiple route tables and routing rules with os-net-config. With policy-based routing, a host with multiple links can send traffic through a particular interface depending on the source address. You can also define route rules for each interface.
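As an illustration, an os-net-config fragment for policy-based routing defines a route table and then attaches routes and rules to an interface; the table name, IDs, and addresses below are illustrative:

```yaml
network_config:
  - type: route_table
    name: custom
    table_id: 200
  - type: interface
    name: em1
    use_dhcp: false
    addresses:
      - ip_netmask: 192.0.2.10/24
    routes:
      - ip_netmask: 10.1.3.0/24
        next_hop: 192.0.2.5
        table: 200            # install this route in the custom table
    rules:
      - rule: "iif em1 table 200"
        comment: "Route traffic arriving on em1 with table 200"
```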

2.7. CloudOps

This section outlines the top new features and changes for the CloudOps components.

Native multiple cloud support
In Service Telemetry Framework (STF) 1.1, multiple cloud support is native in the Service Telemetry Operator. This is provided by the new clouds parameter.
Custom SmartGateway objects
In STF 1.1, the Smart Gateway Operator can directly manage custom SmartGateway objects. You can use the clouds parameter to configure STF-managed cloud instances. You can set the clouds parameter to an empty list to indicate that the Service Telemetry Operator should not manage SmartGateway objects.
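A ServiceTelemetry manifest using the clouds parameter might look like the following sketch; the apiVersion and field names may differ between STF releases, so treat them as assumptions:

```yaml
apiVersion: infra.watch/v1beta1   # may differ between STF releases
kind: ServiceTelemetry
metadata:
  name: default
spec:
  # Set clouds: [] to stop the operator from managing SmartGateway objects
  clouds:
    - name: cloud1
      metrics:
        collectors:
          - collectorType: collectd
            subscriptionAddress: collectd/telemetry
```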
SNMP traps
In STF 1.1, delivery of SNMP traps via Alertmanager webhooks has been implemented.

2.8. Network Functions Virtualization

This section outlines the top new features for Network Functions Virtualization (NFV).

Hyper-converged Infrastructure (HCI) deployments with OVS-DPDK
Red Hat OpenStack Platform 16.1 includes support for hyper-converged infrastructure (HCI) deployments with OVS-DPDK. In an HCI architecture, overcloud nodes with Compute and Ceph Storage services are co-located and configured for optimized resource usage.
Open vSwitch (OVS) hardware offload with OVS-ML2
In Red Hat OpenStack Platform 16.1, you can offload the OVS switching function to SmartNIC hardware. This enhancement reduces the processing resources required and accelerates the datapath. In Red Hat OpenStack Platform 16.1, this feature has graduated from Technology Preview and is now fully supported. See Configuring OVS hardware offload in the Network Functions Virtualization Planning and Configuration Guide.

2.9. Technology Previews

This section provides an overview of the top new technology previews in this release of Red Hat OpenStack Platform.

Note

For more information on the support scope for features marked as technology previews, see Technology Preview Features Support Scope.

Persistent memory for instances
As a cloud administrator, you can create and configure persistent memory namespaces on Compute nodes that have NVDIMM hardware. Your cloud users can use these nodes to create instances that use the persistent memory namespaces to provide vPMEM.
Memory encryption for instances
As a cloud administrator, you can now configure SEV-capable Compute nodes to provide cloud users the ability to create instances with memory encryption enabled. For more information, see Configuring SEV-capable Compute nodes to provide memory encryption for instances.
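For example, memory encryption is requested through the `hw:mem_encryption` flavor extra spec (it can also be set as an image property); the flavor and image names below are placeholders:

```shell
# Create an SEV-enabled flavor and boot an instance from it (names are placeholders)
openstack flavor create --ram 2048 --disk 20 --vcpus 2 m1.small-sev
openstack flavor set --property hw:mem_encryption=True m1.small-sev
openstack server create --flavor m1.small-sev --image rhel8 sev-instance
```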
Undercloud minion
This release contains the ability to install undercloud minions. An undercloud minion provides additional heat-engine and ironic-conductor services on a separate host. These additional services support the undercloud with orchestration and provisioning operations. The distribution of undercloud operations across multiple hosts provides more resources to run an overcloud deployment, which can result in potentially faster and larger deployments.
Deploying bare metal over IPv6 with director
If you have IPv6 nodes and infrastructure, you can configure the undercloud and the provisioning network to use IPv6 instead of IPv4 so that director can provision and deploy Red Hat OpenStack Platform onto IPv6 nodes. For more information, see Configuring the undercloud for bare metal provisioning over IPv6 and Configuring a custom IPv6 provisioning network. In RHOSP 16.1.2, this feature has graduated from Technology Preview to full support.
Nova-less provisioning

In Red Hat OpenStack Platform 16.1, you can separate the provisioning and deployment stages of your deployment into distinct steps:

  1. Provision your bare metal nodes.

    1. Create a node definition file in yaml format.
    2. Run the provisioning command, including the node definition file.
  2. Deploy your overcloud.

    1. Run the deployment command, including the heat environment file that the provisioning command generates.

The provisioning process provisions your nodes and generates a heat environment file that contains various node specifications, including node count, predictive node placement, custom images, and custom NICs. When you deploy your overcloud, include this file in the deployment command.
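The two-step flow above can be sketched as follows; the file names, role names, and counts are illustrative:

```yaml
# nodes.yaml - minimal node definition sketch
- name: Controller
  count: 1
- name: Compute
  count: 1
```

```shell
# Step 1: provision the bare metal nodes and generate a heat environment file
openstack overcloud node provision \
  --stack overcloud \
  --output deployed_nodes.yaml \
  nodes.yaml

# Step 2: deploy the overcloud, including the generated environment file
openstack overcloud deploy --templates \
  -e deployed_nodes.yaml
```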