Chapter 2. Top New Features

This section provides an overview of the top new features in this release of Red Hat OpenStack Platform.

2.1. Red Hat OpenStack Platform Director

This section outlines the top new features for the director.

Fast forward upgrades
The director provides a fast forward upgrade path through multiple versions, specifically from Red Hat OpenStack Platform 10 to Red Hat OpenStack Platform 13. The goal is to give users the opportunity to remain on certain OpenStack versions that are considered long life versions and to upgrade only when the next long life version is available. Full instructions are available in the Fast Forward Upgrades Guide.
Red Hat Virtualization control plane
The director now supports provisioning an overcloud using Controller nodes deployed in Red Hat Virtualization. For more information about new virtualization features, see Virtualize your OpenStack control plane with Red Hat Virtualization and Red Hat OpenStack Platform 13.

2.2. Containers

This section outlines the top new features for containerization in Red Hat OpenStack Platform.

Fully containerized services
This release provides all Red Hat OpenStack Platform services as containers, including services that were not containerized in the previous version: OpenStack Networking (neutron), OpenStack Block Storage (cinder), and OpenStack Shared File Systems (manila). The overcloud now uses fully containerized services.

2.3. Bare Metal Service

This section outlines the top new features for the Bare Metal (ironic) service.

L3 routed spine-leaf network
The director includes the capability to define multiple networks for provisioning and introspection functions. This feature, in conjunction with composable networks, allows users to provision and configure a complete L3 routed spine-leaf architecture for the overcloud. Full instructions are available in the Spine Leaf Networking Guide.
Red Hat Virtualization driver
The director OpenStack Bare Metal (ironic) service includes a driver (staging-ovirt) to manage virtual nodes within a Red Hat Virtualization environment.
Red Hat OpenStack Platform for POWER
You can now deploy pre-provisioned overcloud Compute nodes on IBM POWER8 little endian hardware.

2.4. Ceph Storage

This section outlines the top new features for Ceph Storage.

Red Hat Ceph Storage 3.0 support
With this release, Red Hat Ceph Storage 3.0 (Luminous) is the default supported version of Ceph for Red Hat OpenStack Platform and is the default version deployed by the director. Ceph now supports rolling upgrades from version 2.x to version 3. External clusters (those not deployed by the director) running Red Hat Ceph Storage 2.x (Jewel) remain compatible with the newer Ceph client. If your Ceph cluster was deployed using the director, upgrading to the new OpenStack release also upgrades Red Hat Ceph Storage to 3.0.
Scale out Ceph Metadata Server and RADOS Gateway nodes
Red Hat Ceph Storage 3.0 adds support for scaling metadata load across multiple metadata servers (MDS) by appropriate configuration of the Ceph File System (CephFS). Once configured, extra dedicated MDS servers available in your Ceph cluster are automatically assigned to take on this extra load. Additionally, new dedicated Ceph RADOS Gateway (RGW) nodes can be added, allowing RGW to scale up as needed.
Manila CephFS storage with NFS
The Shared File System service (manila) supports mounting shared file systems backed by a Ceph File System (CephFS) via the NFSv4 protocol. NFS-Ganesha servers operating on Controller nodes are used to export CephFS to tenants with High Availability (HA). Tenants are isolated from one another and may only access CephFS through the provided NFS gateway interface. This new feature is fully integrated into director, thereby enabling CephFS back end deployment and configuration for the Shared File System service.
Enhanced multiple Cinder Ceph pools support
Block Storage (cinder) RADOS block device (RBD) back ends can be mapped to different pools within the same Ceph cluster using a director template parameter, CinderRbdExtraPools. A new Block Storage RBD back end is created for each Ceph pool associated with this parameter, in addition to the standard RBD back end associated with the CinderRbdPoolName parameter.
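For example, the following minimal sketch generates a custom environment file that maps two extra Ceph pools to additional RBD back ends; the pool names fastpool and archive are hypothetical placeholders, and only the two parameters named above are shown.

    import yaml

    # Minimal sketch of a custom environment file for the overcloud deployment.
    # The pool names "fastpool" and "archive" are hypothetical examples.
    environment = {
        "parameter_defaults": {
            # Standard RBD back end pool
            "CinderRbdPoolName": "volumes",
            # Each pool listed here receives its own additional RBD back end
            "CinderRbdExtraPools": "fastpool,archive",
        }
    }

    with open("cinder-rbd-pools.yaml", "w") as fh:
        yaml.safe_dump(environment, fh, default_flow_style=False)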
RBD mirroring through the director with ceph-ansible
The Ceph rbd-mirror daemon pulls image updates from a remote cluster and applies them to the image within a local cluster. RBD mirroring is deployed as a container using ceph-ansible with Red Hat Ceph Storage 3.0 (Luminous). OpenStack metadata related to the image is not copied by rbd-mirror.

2.5. Compute

This section outlines the top new features for the Compute service.

Real-Time KVM integration

Integration of real-time KVM (RT-KVM) with the Compute service is now fully supported. The benefits of RT-KVM are:

  • Deterministic and low average latency for system calls and interrupts.
  • Precision Time Protocol (PTP) support in the guest instance for accurate clock synchronization (community support for this release).

2.6. High Availability

This section outlines the top new features for high availability.

Director integration for Instance HA
You can now deploy Instance HA with the director. This allows you to configure installation and upgrade for Instance HA without further manual steps.
Note

Director integration for Instance HA is available only in version 13 and later. To upgrade from previous versions to version 13, including fast forward upgrades, you must first manually disable Instance HA.

2.7. Metrics and Monitoring

This section outlines the top new features and changes for the metrics and monitoring components.

collectd 5.8 integration

The collectd 5.8 version includes the following additional plugins:

  • ovs-stats - The plugin collects statistics of OVS-connected bridges and interfaces.
  • ovs-events - The plugin monitors the link status of Open vSwitch (OVS) connected interfaces, dispatches the values to collectd, and sends a notification whenever a link state change occurs in the OVS database.
  • hugepages - The hugepages plugin monitors free and used huge pages by number, bytes, or percentage on a platform.
  • intel-rdt - The intel_rdt plugin collects information provided by monitoring features of Intel Resource Director Technology (Intel® RDT), such as Cache Monitoring Technology (CMT) and Memory Bandwidth Monitoring (MBM). These features provide information about shared resource usage, such as last level cache occupancy, local memory bandwidth usage, remote memory bandwidth usage, and instructions per clock.
  • libvirt plugin extension - The libvirt plugin is extended to support CMT, MBM, CPU pinning, utilization, and state metrics on the platform.
collectd and gnocchi integration

The collectd-gnocchi plugin sends metrics to gnocchi. By default, it creates a resource type named collectd and a new resource for each monitored host.

Each host has a list of metrics created dynamically using the following naming convention:

plugin-plugin_instance/type-type_instance-value_number

For the metrics to be created properly, ensure that the archive policy rules match.
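As an illustration only, the following sketch composes metric names according to that convention; the plugin and type values are hypothetical examples, not guaranteed metric names.

    # Illustration of the collectd-gnocchi metric naming convention shown above.
    # The plugin, instance, and type values are hypothetical examples.
    def metric_name(plugin, plugin_instance, type_, type_instance, value_number):
        return f"{plugin}-{plugin_instance}/{type_}-{type_instance}-{value_number}"

    # For example, a metric from the ovs_stats plugin on bridge br-int:
    print(metric_name("ovs_stats", "br-int", "if_packets", "rx", 0))
    # ovs_stats-br-int/if_packets-rx-0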

Support for Sensu with multiple RabbitMQ servers
With this release, Red Hat OpenStack Platform adds support for Sensu with multiple RabbitMQ servers. To enable this, use the MonitoringRabbitCluster parameter in the config.yaml file.
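The following is a minimal sketch of that parameter with a hypothetical cluster name; the surrounding structure of your config.yaml may differ.

    import yaml

    # Hedged sketch: the MonitoringRabbitCluster parameter with a hypothetical
    # cluster name. The rest of config.yaml is omitted.
    print(yaml.safe_dump({"MonitoringRabbitCluster": "sensu-cluster"}))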
Intel Resource Director Technology/Memory Bandwidth Monitoring support
Memory Bandwidth Monitoring (MBM) is an integral part of Intel® Resource Director Technology (RDT). Memory usage and availability are gathered from all nodes and made available to OpenStack so that it can make better scheduling decisions and deliver on SLAs.
Removal of Telemetry API and ceilometer-collector
The Telemetry API service is replaced by the OpenStack Telemetry Metrics (gnocchi) service and the OpenStack Telemetry Alarming (aodh) service APIs. The ceilometer-collector service is replaced by the ceilometer-notification-agent daemon; the Telemetry polling agent now sends its sample messages to the ceilometer-notification-agent daemon.
Note

Ceilometer as a whole is not deprecated, just the Telemetry API service and the ceilometer-collector service.

2.8. Network Functions Virtualization

This section outlines the top new features for Network Functions Virtualization (NFV).

Real-Time KVM Compute role for NFV workloads
The real-time KVM (RT-KVM) Compute nodes now support NFV workloads, with the addition of a RT-KVM Compute node role. This new role exposes a subset of Compute nodes with real-time capabilities to support guests with stringent latency requirements.

2.9. OpenDaylight

This section outlines the top new features for the OpenDaylight service.

OpenDaylight integration

OpenDaylight is a flexible, modular, and open SDN platform that is now fully supported with this Red Hat OpenStack Platform release. The current Red Hat offering combines carefully selected OpenDaylight components that are designed to enable the OpenDaylight SDN controller as a networking back end for OpenStack. The key OpenDaylight project used in this solution is NetVirt, with support for the OpenStack neutron API.

The following features are included:

  • Data Plane Abstraction: A P4 plug-in for the platform.
  • Containers: A plug-in for Kubernetes, as well as development of Neutron Northbound extensions for mixed VM-container environments.

For more information, see the Red Hat OpenDaylight Product Guide and the Red Hat OpenDaylight Installation and Configuration Guide.

2.10. OpenStack Networking

This section outlines the top new features for the Networking service.

Octavia LBaaS
Octavia is now fully supported. Octavia is an official OpenStack project that provides load balancing capabilities and is intended to replace the current HAProxy-based implementation. Octavia implements the LBaaS v2 API, but also provides additional features. Octavia includes a reference load balancing driver that provides load balancing with amphora (implemented as Compute VMs).
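As a brief sketch of the workflow, the following uses the openstacksdk load balancer API to create a load balancer; the cloud name overcloud and subnet name private-subnet are hypothetical placeholders.

    import openstack

    # Hedged sketch using the openstacksdk load_balancer proxy; the cloud and
    # subnet names below are hypothetical placeholders.
    conn = openstack.connect(cloud="overcloud")
    subnet = conn.network.find_subnet("private-subnet")

    # Create a load balancer with its VIP on the chosen subnet; Octavia builds
    # an amphora instance behind the scenes to service it.
    lb = conn.load_balancer.create_load_balancer(
        name="lb1",
        vip_subnet_id=subnet.id,
    )
    print(lb.id, lb.provisioning_status)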
Open Virtual Network (OVN)
OVN is now fully supported. OVN is an Open vSwitch-based network virtualization solution for supplying network services to instances. OVN fully supports the neutron API.

2.11. Security

This section outlines the top new features for security components.

Barbican
OpenStack Key Manager (barbican) is a secrets manager for Red Hat OpenStack Platform. You can use the barbican API and command line to centrally manage the certificates, keys, and passwords used by OpenStack services.
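As an example of programmatic access, the following minimal sketch stores a secret through the barbican API using openstacksdk; the cloud name and secret contents are hypothetical placeholders.

    import openstack

    # Hedged sketch: store a passphrase in barbican via openstacksdk.
    # The cloud name and secret contents are hypothetical placeholders.
    conn = openstack.connect(cloud="overcloud")
    secret = conn.key_manager.create_secret(
        name="db-password",
        secret_type="passphrase",
        payload="s3cr3t",
        payload_content_type="text/plain",
    )
    print(secret.secret_ref)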
Barbican - Support for encrypted volumes
You can use barbican to manage your Block Storage (cinder) encryption keys. This configuration uses LUKS to encrypt the disks attached to your instances, including boot disks. The key management aspect is performed transparently to the user.
Barbican - glance image signing
You can configure the Image Service (glance) to verify that an uploaded image has not been tampered with. The image is first signed with a key stored in barbican, and the image is then validated before each use.
Integration with Policy Decision Points (PDP)
For customers that rely on Policy Decision Points (PDP) to control access to resources, Identity Service (keystone) can now integrate projects with an external PDP for authorization checks. The external PDP can evaluate access requests and can grant or deny access based on established policy.
Infrastructure and virtualization hardening
AIDE intrusion detection is now available as a technology preview. The director’s AIDE service allows an operator to centrally set an intrusion detection ruleset and then install and set up AIDE on the overcloud.

2.12. Storage

This section outlines the top new features for storage components.

Block Storage - Containerized deployment of the Block Storage service
Containerized deployment of the Block Storage service (cinder) is now the default in this release. If you use a back end for this service that has external installation dependencies, you must obtain vendor-specific containers for your deployment.
Block Storage - Multi-back end availability zones
The Block Storage service (cinder) now allows back end availability zones to be defined using a new driver configuration option, backend_availability_zone, in the back end sections of the configuration file. In previous versions, back ends configured in a cinder-volume had to be part of the same storage availability zone.
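The following sketch shows what two RBD back-end sections with different availability zones might look like in cinder.conf; the section and zone names are hypothetical, and a director-based deployment normally generates this file for you.

    import configparser

    # Hedged sketch of a cinder.conf fragment defining two RBD back ends in
    # different availability zones; section and zone names are hypothetical.
    conf = configparser.ConfigParser()
    conf["rbd-az1"] = {
        "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
        "volume_backend_name": "rbd-az1",
        "backend_availability_zone": "az1",
    }
    conf["rbd-az2"] = {
        "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
        "volume_backend_name": "rbd-az2",
        "backend_availability_zone": "az2",
    }

    with open("cinder-backends.conf", "w") as fh:
        conf.write(fh)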
Block Storage - OpenStack Key Manager support
The Block Storage service (cinder) can now use the OpenStack Key Manager (barbican) to store encryption keys used for volume encryption. This feature is enabled by configuring the OpenStack Key Manager in the director. New keys can be added to the OpenStack Key Manager by users with the admin or creator roles in the Identity Service (keystone).
Block Storage - RBD driver encryption support
The RBD driver now handles Block Storage service (cinder) volume encryption using LUKS. This feature provides the capability to encrypt volumes on RBD using the Block Storage service and Compute service, providing data-at-rest security. The OpenStack Key Manager (barbican) is required to use RBD driver encryption. RBD driver encryption is only supported for the Block Storage service.
Image Service - Image signing and verification support
The Image Service (glance) now provides signing and signature validation of bootable images using OpenStack Key Manager (barbican). Image signatures are now verified prior to storing the image. You must add an encryption signature to the original image before uploading it to the Image Service. This signature is used to validate the image upon booting. OpenStack Key Manager provides key management support for signing keys.
Object Storage - At-rest encryption and OpenStack Key Manager support
The Object Storage (swift) service can now store objects in encrypted form using AES in CTR mode with 256-bit keys stored in the OpenStack Key Manager (barbican). Once encryption is enabled for Object Storage using director, the system creates a single key used to encrypt all objects in the cluster. This provides options for protecting objects and maintaining security compliance in Object Storage clusters.
Shared File System - Containerized deployment of the Shared File System service
Containerized deployment of the Shared File System service (manila) is now the default in this release. If you use a back end for this service that has external installation dependencies, you must obtain vendor-specific containers for your deployment.
Shared File System - IPv6 access rule support with NetApp ONTAP cDOT driver
The Shared File System service (manila) now supports exporting shares backed by NetApp ONTAP back ends over IPv6 networks. Access to the exported shares is controlled by IPv6 client addresses.
Shared File System - Manila CephFS storage with NFS
The Shared File System service (manila) supports mounting shared file systems backed by a Ceph File System (CephFS) via the NFSv4 protocol. NFS-Ganesha servers operating on Controller nodes are used to export CephFS to tenants with High Availability (HA). Tenants are isolated from one another and may only access CephFS through the provided NFS gateway interface. This new feature is fully integrated into director, thereby enabling CephFS back end deployment and configuration for the Shared File System service.

2.13. Technology Previews

This section outlines features that are in technology preview in Red Hat OpenStack Platform 13.

Note

For more information on the support scope for features marked as technology previews, see Technology Preview Features Support Scope.

2.13.1. New Technology Previews

The following new features are provided as technology previews:

Ansible-based configuration (config download)
The director can now generate a set of Ansible playbooks using an overcloud plan as a basis. This changes the overcloud configuration method from OpenStack Orchestration (heat) to an Ansible-based method. Some supported OpenStack Platform 13 features, such as upgrades, use this feature as part of their processes. However, usage outside of these supported areas is not recommended for production and is only available as a technology preview.
OVS hardware offload
Open vSwitch (OVS) hardware offload accelerates OVS by moving heavy processing to hardware with SmartNICs. This saves host resources by offloading the OVS processing to the SmartNIC.

2.13.2. Previously Released Technology Previews

The following features remain as technology previews:

Benchmarking service

Rally is a benchmarking tool that automates and unifies multi-node OpenStack deployment, cloud verification, benchmarking, and profiling. It can be used as a basic tool for an OpenStack CI/CD system to continuously improve SLA, performance, and stability. It consists of the following core components:

  • Server Providers - provide a unified interface for interaction with different virtualization technologies (LXC, virsh, and so on) and cloud suppliers. This is done via SSH access within a single L3 network.
  • Deploy Engines - deploy an OpenStack distribution before any benchmarking procedures take place, using servers retrieved from Server Providers.
  • Verification - runs a specific set of tests against the deployed cloud to check that it works correctly, collects the results, and presents them in a human-readable form.
  • Benchmark Engine - allows you to write parameterized benchmark scenarios and run them against the cloud.
Benchmarking service - introduction of a new plug-in type: hooks
Allows test scenarios to run as iterations, and provides timestamps (and other information) about executed actions in the rally report.
Benchmarking service - new scenarios
Benchmarking scenarios have been added for nova, cinder, magnum, ceilometer, manila, and neutron.
Benchmarking service - refactor of the verification component
Rally Verify is used to launch Tempest. It was refactored to cover a new model: verifier type, verifier, and verification results.
Cells
OpenStack Compute includes the concept of Cells, provided by the nova-cells package, for dividing computing resources. In this release, Cells v1 has been replaced by Cells v2. Red Hat OpenStack Platform deploys a "cell of one" as a default configuration, but does not support multi-cell deployments at this time.
DNS-as-a-Service (DNSaaS)
DNS-as-a-Service (DNSaaS), also known as Designate, includes a REST API for domain and record management, is multi-tenanted, and integrates with OpenStack Identity Service (keystone) for authentication. DNSaaS includes a framework for integration with Compute (nova) and OpenStack Networking (neutron) notifications, allowing auto-generated DNS records. DNSaaS includes integration with the Bind9 back end.
Firewall-as-a-Service (FWaaS)
The Firewall-as-a-Service plug-in adds perimeter firewall management to OpenStack Networking (neutron). FWaaS uses iptables to apply firewall policy to all virtual routers within a project and supports one firewall policy and logical firewall instance per project. FWaaS operates at the perimeter by filtering traffic at the OpenStack Networking (neutron) router. This distinguishes it from security groups, which operate at the instance level.
Google Cloud storage backup driver (Block Storage)
The Block Storage (cinder) service can now be configured to use Google Cloud Storage for storing volume backups. This feature presents an alternative to the costly maintenance of a secondary cloud simply for disaster recovery.
Link aggregation for bare metal nodes

This release introduces link aggregation for bare metal nodes. Link aggregation allows you to configure bonding on your bare metal node NICs to support failover and load balancing. This feature requires specific hardware switch vendor support that can be configured from a dedicated neutron plug-in. Verify that your hardware vendor switch supports the correct neutron plug-in.

Alternatively, you can manually preconfigure switches to have bonds set up for the bare metal nodes. To enable nodes to boot off one of the bond interfaces, the switches need to support both LACP and LACP fallback (bond links fall back to individual links if a bond is not formed). Otherwise, the nodes will also need a separate provisioning and cleaning network.

Red Hat SSO
This release includes a version of the keycloak-httpd-client-install package. This package provides a command-line tool that helps configure the Apache mod_auth_mellon SAML Service Provider as a client of the Keycloak SAML IdP.