Chapter 2. Top New Features
This section provides an overview of the top new features in this release of Red Hat OpenStack Platform.
2.1. Red Hat Enterprise Linux 8 Features that Affect Red Hat OpenStack Platform 15
This section outlines new features in Red Hat Enterprise Linux 8 that affect Red Hat OpenStack Platform 15.
Red Hat OpenStack Platform 15 now uses Red Hat Enterprise Linux 8 for the operating system. This includes the undercloud node, the overcloud nodes, and containerized services. Some key differences between Red Hat Enterprise Linux 7 and 8 impact the architecture of Red Hat OpenStack Platform 15. The following list provides information on these key differences and their effect on Red Hat OpenStack Platform:
- New Red Hat Enterprise Linux 8 repositories
In addition to the Red Hat OpenStack Platform 15 repository, OpenStack Platform now uses a new set of repositories specific to Red Hat Enterprise Linux 8. This includes the following repositories:
- BaseOS for the main operating system packages.
- AppStream for dependencies such as Python 3 packages and virtualization tools.
- High Availability for Red Hat Enterprise Linux 8 versions of high availability tools.
- Red Hat Ansible Engine for the latest supported version of Ansible Engine.
This change affects the repositories you must enable for both the undercloud and overcloud.
- Red Hat Enterprise Linux 8 container images
- All OpenStack Platform 15 container images use Red Hat Enterprise Linux 8 Universal Base Image (UBI) as a basis. OpenStack Platform director automatically configures these container images during undercloud and overcloud creation.
Red Hat does not support running OpenStack Platform containers based on Red Hat Enterprise Linux 7 on a Red Hat Enterprise Linux 8 host.
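As an illustration, preparing the UBI-based container images is driven by an environment file passed to director. The following is a minimal sketch assuming the `ContainerImagePrepare` parameter; the namespace, name prefix, and tag values are illustrative and should be confirmed against your subscription and release:

```yaml
# Hypothetical containers-prepare-parameter.yaml sketch.
# Namespace, prefix, and tag are example values, not documented defaults.
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true        # push images to the undercloud registry
    set:
      namespace: registry.redhat.io/rhosp15-rhel8
      name_prefix: openstack-
      tag: latest
```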
- Red Hat Enterprise Linux 8 bare metal images
- All OpenStack Platform 15 overcloud kernel, ramdisk, and QCOW2 images use Red Hat Enterprise Linux 8 as a basis. This includes OpenStack Bare Metal (ironic) introspection images.
- Python 3 packages
- All OpenStack Platform 15 services use Python 3 packages.
- New Container Tools
Red Hat Enterprise Linux 8 no longer includes Docker. Instead, Red Hat provides new tools for building and managing containers:
- Pod Manager (Podman) is a container management tool that implements almost all Docker CLI commands, excluding commands related to Docker Swarm. Podman manages pods, containers, and container images. One of the major differences between Podman and Docker is that Podman can manage resources without a daemon running in the background. For more information about Podman, see the Podman website.
- Buildah specializes in building Open Containers Initiative (OCI) images, which you use in conjunction with Podman. Buildah commands replicate what you find in a Dockerfile. Buildah also provides a lower-level `coreutils` interface to build container images, which allows you to build containers without requiring a Dockerfile. Buildah can also use other scripting languages to build container images without requiring a daemon.
- Docker Registry Replacement
Red Hat Enterprise Linux 8 no longer includes the `docker-distribution` package, which installed a Docker Registry v2. To maintain compatibility, OpenStack Platform 15 includes an Apache web server and a virtual host called `image-serve`, which provides a container registry. Like `docker-distribution`, this registry uses port 8787/TCP with SSL/TLS disabled.
This registry is a read-only container registry and does not support `buildah push` commands. This means the registry does not allow you to push non-director and non-OpenStack Platform containers. However, you can modify supported OpenStack Platform images with the director's container preparation workflow.
- Network Time Synchronization
Red Hat Enterprise Linux 8 no longer includes `ntpd` to synchronize your system clock. Red Hat Enterprise Linux 8 now provides `chronyd` as a replacement service. The director configures `chronyd` automatically, but be advised that manual time synchronization requires the `chronyc` command.
- High Availability and Shared Services
- Pacemaker 2.0 support. This release upgrades the version of Pacemaker to 2.0 to support deployment on top of Red Hat Enterprise Linux 8, including support for Knet and multiple NICs. You can now use the director to configure fencing with Pacemaker for your high availability cluster.
- HAProxy 1.8 support in director. This release upgrades the version of HAProxy to 1.8 to support deployment on top of Red Hat Enterprise Linux 8. You can now use the director to configure HAProxy for your high availability cluster.
2.2. Red Hat OpenStack Platform director
This section outlines the top new features for the director.
- Deploy an all-in-one overcloud
This release includes the capability to deploy standalone overclouds that contain Controller and Compute services. (High availability is not supported.)
Use the `Standalone.yaml` role file to deploy all-in-one overclouds.
- Enable and disable services on the all-in-one overcloud
- Customize services with additional environment files.
The following services are enabled by default:
- Nova and related services
- Neutron and related services
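The default services can be adjusted through additional environment files. One common TripleO pattern is to disable a composable service by mapping its resource to `OS::Heat::None`; the service chosen in this sketch is purely illustrative:

```yaml
# Hypothetical custom environment file: disables the Block Storage volume
# service by mapping its composable service resource to OS::Heat::None.
resource_registry:
  OS::TripleO::Services::CinderVolume: OS::Heat::None
```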
- Unified composable service templates
This release includes a unified set of composable service templates in `/usr/share/openstack-tripleo-heat-templates/deployment/`. These templates merge service configuration from previous template sets:
- The containerized templates from the previous template set.
- The Puppet-based templates from the previous template set.
Most service resources, which start with the `OS::TripleO::Services` Heat namespace, now refer to the unified template set. A few minor services still refer to legacy templates.
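For example, a resource in the `OS::TripleO::Services` namespace now maps to a template under the unified `deployment/` directory. The exact template file name below is an assumption for illustration, not a confirmed path:

```yaml
# Illustrative resource_registry mapping to the unified template set;
# the template path shown is an assumed example.
resource_registry:
  OS::TripleO::Services::Keystone: /usr/share/openstack-tripleo-heat-templates/deployment/keystone/keystone-container-puppet.yaml
```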
2.3. OpenStack Compute
This section outlines the top new features for the Compute service.
- NUMA-aware vSwitches (full support)
- OpenStack Compute now takes into account the NUMA node location of physical NICs when launching a Compute instance. This helps to reduce latency and improve performance when managing DPDK-enabled interfaces.
- Parameters to control VM status after restarting Compute nodes
Parameters are now available to determine whether to resume the previous state of virtual machines (VMs) after you restart Compute nodes.
- Using placement service traits to schedule hosts
- Compute nodes now report the CPU instruction set extensions supported by the host, as traits against the resource provider corresponding to that Compute node. An instance can specify which of these traits it requires using image metadata or flavor extra specs. This feature is available when your deployment uses a Libvirt QEMU (x86), Libvirt KVM (x86), or Libvirt KVM (ppc64) virtualization driver. For more information, see Configure Placement Service Traits.
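For example, a flavor can require a CPU feature trait through its extra specs. The trait name below is a standard os-traits identifier; the flavor fragment itself is hypothetical:

```yaml
# Hypothetical flavor fragment: an instance built from this flavor is
# scheduled only to hosts that report the AVX2 CPU extension trait.
# Equivalent CLI: openstack flavor set <flavor> --property trait:HW_CPU_X86_AVX2=required
extra_specs:
  "trait:HW_CPU_X86_AVX2": required
```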
2.4. OpenStack Networking
This section outlines the top new features for the Networking service.
- ML2/OVN Default Plugin
- OVN replaces OVS as the default ML2 mechanism driver. OVN offers a shared networking back end across the Red Hat portfolio, giving operators a consistent experience across multiple products. Its architecture is simpler than that of OVS, and it offers better performance.
2.5. Storage
This section outlines the top new features for the Storage service.
- Deprecation of Data Processing service (sahara)
- In Red Hat OpenStack Platform 15, the Data Processing service (sahara) is deprecated. In a future RHOSP release, the Data Processing service will be removed.
- Simultaneously attach a volume to multiple instances
- In Red Hat OpenStack Platform 15, if the back end driver supports it, you can now simultaneously attach a volume to multiple instances for both the Block Storage service (cinder) and the Compute service (nova). This feature addresses the use case for clustered application workloads that typically require active/active or active/standby scenarios.
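In the Block Storage service, multi-attach is typically enabled through a volume type whose extra specs declare the capability. A minimal sketch, assuming the standard `multiattach` extra spec; the type name is illustrative:

```yaml
# Hypothetical volume type definition enabling multi-attach.
# Equivalent CLI: openstack volume type set <type> --property multiattach="<is> True"
volume_type:
  name: multiattach
  extra_specs:
    multiattach: "<is> True"
```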
- Image Service - Deploy glance-cache on far-edge nodes
- This feature enables deploying Image service (glance) caching at the edge. This feature improves the boot time for instances and decreases the bandwidth usage between core and edge sites, which is useful in medium to large edge sites to avoid using compute nodes to fetch images from the core site.
2.6. Ceph Storage
This section outlines the top new features for Ceph Storage.
- Red Hat Ceph Storage Upgrade
- To maintain compatibility with Red Hat Enterprise Linux 8, Red Hat OpenStack Platform 15 Beta director deploys Red Hat Ceph Storage 4 Beta. Red Hat OpenStack Platform 15 running on Red Hat Enterprise Linux 8 enables you to connect to a preexisting external Red Hat Ceph Storage 3 cluster running on Red Hat Enterprise Linux 7.
2.7. Technology Previews
This section outlines features that are in technology preview in Red Hat OpenStack Platform 15.
For more information on the support scope for features marked as technology previews, see Technology Preview Features Support Scope.
2.7.1. New Technology Previews
- Deploy and manage multiple overclouds from a single undercloud
This release includes the capability to deploy multiple overclouds from a single undercloud. With this capability, you can:
- Interact with a single undercloud to manage multiple distinct overclouds.
- Switch context on the undercloud to interact with different overclouds.
- Reduce redundant management nodes.
- New Director feature to create an active-active configuration for Block Storage service
With Red Hat OpenStack Platform director you can now deploy the Block Storage service (cinder) in an active-active configuration on Ceph RADOS Block Device (RBD) back ends only, if the back end driver supports this configuration.
The new `cinder-volume-active-active.yaml` file defines the active-active cluster name by assigning a value to the `CinderVolumeCluster` parameter. Because `CinderVolumeCluster` is a global Block Storage parameter, you cannot include clustered (active-active) and non-clustered back ends in the same deployment.
The cinder-volume-active-active.yaml file causes director to use the non-Pacemaker, cinder-volume Orchestration service template, and adds the etcd service to your Red Hat OpenStack Platform deployment as a distributed lock manager (DLM).
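Overriding the cluster name might look like the following sketch; the cluster name shown is an example, not a documented default:

```yaml
# Hypothetical parameter override used together with
# cinder-volume-active-active.yaml; the cluster name is illustrative.
parameter_defaults:
  CinderVolumeCluster: cinder-cluster-1
```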
- New director parameter for configuring Block Storage service availability zones
- With Red Hat OpenStack Platform director you can now configure different availability zones for Block Storage service (cinder) volume back ends. Director has a new parameter, CinderXXXAvailabilityZone, where XXX is associated with a specific back end.
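For instance, with an RBD back end the parameter would take the form `CinderRbdAvailabilityZone`; the zone name below is an assumption for illustration:

```yaml
# Hypothetical example: assigns the RBD back end to its own
# Block Storage availability zone. The zone name is illustrative.
parameter_defaults:
  CinderRbdAvailabilityZone: ceph-az-1
```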
- New Redfish BIOS management interface for Bare Metal service
Red Hat OpenStack Platform Bare Metal service (ironic) now has a BIOS management interface, with which you can inspect and modify a device’s BIOS configuration.
In Red Hat OpenStack Platform 15, the Bare Metal service supports BIOS management capabilities for data center devices that are Redfish API compliant. The Bare Metal service implements Redfish calls through the Python library, Sushy.
- Using separate heat stacks
You can now use separate heat stacks for different types of nodes. For example, you can have a stack just for the control plane, a stack for Compute nodes, and another stack for Hyper Converged Infrastructure (HCI) nodes. This approach has the following benefits:
- Management - You can modify and manage your nodes without needing to make changes to the control plane stack.
- Scaling out - You do not need to update all nodes just to add more Compute or Storage nodes; the separate heat stack means that these operations are isolated to selected node types.
- Edge sites - You can segment an edge site within its own heat stack, thereby reducing network and management dependencies on the central data center. The edge site must have its own Availability Zone for its Compute and Storage nodes.
- Deploying multiple Ceph clusters
You can use director to deploy multiple Ceph clusters (either on nodes dedicated to running Ceph or hyper-converged), using separate heat stacks for each cluster. For edge sites, you can deploy a hyper-converged infrastructure (HCI) stack that uses Compute and Ceph storage on the same node. For example, you might deploy two edge stacks named `HCI-01` and `HCI-02`, each in its own availability zone. As a result, each edge stack has its own Ceph cluster and Compute services.
- New Compute (nova) configuration added to enable memoryBacking Source type file, with shared access
A new Compute (nova) parameter is available, `QemuMemoryBackingDir`, which specifies the directory in which to store the memory backing file when a libvirt `memoryBacking` element is configured with source type `file` and access mode `shared`. Note: The `memoryBacking` element is only available from libvirt 4.0.0 and QEMU 2.6.0.
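Setting this parameter through director might look like the following sketch; the directory path is an assumption chosen for illustration, not a documented default:

```yaml
# Hypothetical environment file fragment; the path is an example value.
parameter_defaults:
  QemuMemoryBackingDir: /var/lib/libvirt/qemu/ram
```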
- Support added for templated cell mapping URLs
Director now provides cell mapping URLs for the database and message queue as templates, using variables to represent values such as the username and password. Properties defined in the Compute configuration file of the director specify the variable values.
- Support added for configuring the maximum number of disk devices that can be attached to a single instance
A new Compute (nova) parameter is available,
`max_disk_devices_to_attach`, which specifies the maximum number of disk devices that can be attached to a single instance. The default is unlimited (-1). The following example illustrates how to change the value of `max_disk_devices_to_attach` to 30:

```yaml
parameter_defaults:
  ComputeExtraConfig:
    nova::config::nova_config:
      '[compute]/max_disk_devices_to_attach':
        value: '"30"'
```