Release Notes

Red Hat OpenStack Platform 16.0-Beta

Release details for Red Hat OpenStack Platform 16

OpenStack Documentation Team

Red Hat Customer Content Services

Abstract

This document outlines the major features, enhancements, and known issues in this release of Red Hat OpenStack Platform.

Chapter 1. Introduction

1.1. About this Release

This beta release of Red Hat OpenStack Platform is based on the OpenStack "Train" release. It includes additional features, known issues, and resolved issues specific to Red Hat OpenStack Platform.

Only changes specific to Red Hat OpenStack Platform are included in this document. The release notes for the OpenStack "Train" release itself are available at the following location: https://releases.openstack.org/train/index.html.

Red Hat OpenStack Platform uses components from other Red Hat products. For specific information pertaining to the support of these components, see: https://access.redhat.com/site/support/policy/updates/openstack/platform/.

To evaluate Red Hat OpenStack Platform, sign up at: http://www.redhat.com/openstack/.

Note

The Red Hat Enterprise Linux High Availability Add-On is available for Red Hat OpenStack Platform use cases. For more details about the add-on, see: http://www.redhat.com/products/enterprise-linux-add-ons/high-availability/. For details about the package versions to use in combination with Red Hat OpenStack Platform, see: https://access.redhat.com/site/solutions/509783.

1.2. Requirements

This beta version of Red Hat OpenStack Platform runs on the most recent fully supported release of Red Hat Enterprise Linux (version 8).

The Red Hat OpenStack Platform dashboard is a web-based interface that allows you to manage OpenStack resources and services. The dashboard for this release runs on the latest stable versions of the following web browsers:

  • Chrome
  • Mozilla Firefox
  • Mozilla Firefox ESR
  • Internet Explorer 11 and later (with Compatibility Mode disabled)

Note

Prior to deploying Red Hat OpenStack Platform, it is important to consider the characteristics of the available deployment methods. For more information, see Installing and Managing Red Hat OpenStack Platform.

1.3. Deployment Limits

For a list of deployment limits for Red Hat OpenStack Platform, see Deployment Limits for Red Hat OpenStack Platform.

1.4. Database Size Management

For recommended practices on maintaining the size of the MariaDB databases in your Red Hat OpenStack Platform environment, see Database Size Management for Red Hat Enterprise Linux OpenStack Platform.

1.5. Certified Drivers and Plug-ins

For a list of the certified drivers and plug-ins in Red Hat OpenStack Platform, see Component, Plug-In, and Driver Support in Red Hat OpenStack Platform.

1.6. Certified Guest Operating Systems

For a list of the certified guest operating systems in Red Hat OpenStack Platform, see Certified Guest Operating Systems in Red Hat OpenStack Platform and Red Hat Enterprise Virtualization.

1.7. Product Certification Catalog

For the Red Hat Official Product Certification Catalog, see Product Certification Catalog.

1.8. Bare Metal Provisioning Operating Systems

For a list of the guest operating systems that can be installed on bare metal nodes in Red Hat OpenStack Platform through Bare Metal Provisioning (ironic), see Supported Operating Systems Deployable With Bare Metal Provisioning (ironic).

1.9. Hypervisor Support

This release of the Red Hat OpenStack Platform is supported only with the libvirt driver (using KVM as the hypervisor on Compute nodes).

This release of Red Hat OpenStack Platform also supports Bare Metal Provisioning (ironic).

Bare Metal Provisioning has been fully supported since the release of Red Hat OpenStack Platform 7 (Kilo). Bare Metal Provisioning allows you to provision bare-metal machines using common technologies (such as PXE and IPMI) to cover a wide range of hardware while supporting pluggable drivers to allow the addition of vendor-specific functionality.

Red Hat does not provide support for other Compute virtualization drivers such as the deprecated VMware "direct-to-ESX" hypervisor or non-KVM libvirt hypervisors.

1.10. Content Delivery Network (CDN) Repositories

This section describes the repositories required to deploy Red Hat OpenStack Platform 16.0.

You can install Red Hat OpenStack Platform 16 through the Content Delivery Network (CDN) using subscription-manager. For more information, see Preparing the undercloud.

Warning

Some packages in the Red Hat OpenStack Platform software repositories conflict with packages provided by the Extra Packages for Enterprise Linux (EPEL) software repositories. The use of Red Hat OpenStack Platform on systems with the EPEL software repositories enabled is unsupported.

1.10.1. Undercloud repositories

Enable the following repositories for the installation and configuration of the undercloud.

Core repositories

The following core repositories are required to install the undercloud.

  • Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs): rhel-8-for-x86_64-baseos-rpms. Base operating system repository for x86_64 systems.
  • Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs): rhel-8-for-x86_64-appstream-rpms. Contains Red Hat OpenStack Platform dependencies.
  • Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs): rhel-8-for-x86_64-highavailability-rpms. High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability.
  • Red Hat Ansible Engine 2.8 for RHEL 8 x86_64 (RPMs): ansible-2.8-for-rhel-8-x86_64-rpms. Ansible Engine for Red Hat Enterprise Linux. Used to provide the latest version of Ansible.
  • Red Hat Satellite Tools for RHEL 8 Server RPMs x86_64: satellite-tools-6.5-for-rhel-8-x86_64-rpms. Tools for managing hosts with Red Hat Satellite 6.
  • Red Hat OpenStack Platform 16 Beta for RHEL 8 (RPMs): openstack-beta-for-rhel-8-x86_64-rpms. Core Red Hat OpenStack Platform repository, which contains packages for Red Hat OpenStack Platform director.
  • Red Hat Fast Datapath for RHEL 8 (RPMs): fast-datapath-for-rhel-8-x86_64-rpms. Provides Open vSwitch (OVS) packages for OpenStack Platform.

IBM POWER repositories

The following repositories are for Red Hat OpenStack Platform on IBM POWER (ppc64le) architecture. Use these repositories in place of their equivalents in the core repositories.

  • Red Hat Enterprise Linux 8 for IBM Power, little endian - BaseOS (RPMs): rhel-8-for-ppc64le-baseos-rpms. Base operating system repository for ppc64le systems.
  • Red Hat Enterprise Linux 8 for IBM Power, little endian - AppStream (RPMs): rhel-8-for-ppc64le-appstream-rpms. Contains Red Hat OpenStack Platform dependencies.
  • Red Hat Enterprise Linux 8 for IBM Power, little endian - High Availability (RPMs): rhel-8-for-ppc64le-highavailability-rpms. High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability.
  • Red Hat Ansible Engine 2.8 for RHEL 8 IBM Power, little endian (RPMs): ansible-2.8-for-rhel-8-ppc64le-rpms. Ansible Engine for Red Hat Enterprise Linux. Used to provide the latest version of Ansible.
  • Red Hat OpenStack Platform 16 Beta for RHEL 8 (RPMs): openstack-beta-for-rhel-8-ppc64le-rpms. Core Red Hat OpenStack Platform repository for ppc64le systems.
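As an illustration only, the x86_64 undercloud core repositories listed above could be enabled with subscription-manager on a registered host. The loop below is a sketch, not the supported procedure; see Preparing the undercloud for the documented steps.

```shell
# Sketch: enable the x86_64 undercloud core repositories with subscription-manager.
# Assumes a registered RHEL 8 host with an appropriate subscription attached.
REPOS="
rhel-8-for-x86_64-baseos-rpms
rhel-8-for-x86_64-appstream-rpms
rhel-8-for-x86_64-highavailability-rpms
ansible-2.8-for-rhel-8-x86_64-rpms
satellite-tools-6.5-for-rhel-8-x86_64-rpms
openstack-beta-for-rhel-8-x86_64-rpms
fast-datapath-for-rhel-8-x86_64-rpms
"

# Only attempt the change on a subscribed host when running as root.
if command -v subscription-manager >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
  for repo in $REPOS; do
    subscription-manager repos --enable="$repo"
  done
fi

# Seven core repositories are expected for the undercloud.
REPO_COUNT=$(printf '%s' "$REPOS" | grep -c -- '-rpms')
```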

1.10.2. Overcloud repositories

You must enable the following repositories to install and configure the overcloud.

Core repositories

The following core repositories are required to install the overcloud.

  • Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs): rhel-8-for-x86_64-baseos-rpms. Base operating system repository for x86_64 systems.
  • Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs): rhel-8-for-x86_64-appstream-rpms. Contains Red Hat OpenStack Platform dependencies.
  • Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs): rhel-8-for-x86_64-highavailability-rpms. High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability.
  • Red Hat Ansible Engine 2.8 for RHEL 8 x86_64 (RPMs): ansible-2.8-for-rhel-8-x86_64-rpms. Ansible Engine for Red Hat Enterprise Linux. Used to provide the latest version of Ansible.
  • Advanced Virtualization for RHEL 8 x86_64 (RPMs): advanced-virt-for-rhel-8-x86_64-rpms. Provides virtualization packages for OpenStack Platform.
  • Red Hat Satellite Tools for RHEL 8 Server RPMs x86_64: satellite-tools-6.5-for-rhel-8-x86_64-rpms. Tools for managing hosts with Red Hat Satellite 6.
  • Red Hat OpenStack Platform 16 Beta for RHEL 8 (RPMs): openstack-beta-for-rhel-8-x86_64-rpms. Core Red Hat OpenStack Platform repository.
  • Red Hat Fast Datapath for RHEL 8 (RPMs): fast-datapath-for-rhel-8-x86_64-rpms. Provides Open vSwitch (OVS) packages for OpenStack Platform.

Real Time repositories

The following repositories are required for Real Time Compute (RTC) functionality.

  • Red Hat Enterprise Linux 8 for x86_64 - Real Time (RPMs): rhel-8-for-x86_64-rt-rpms. Repository for Real Time KVM (RT-KVM); contains packages to enable the real time kernel. Enable this repository on all Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository.
  • Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV (RPMs): rhel-8-for-x86_64-nfv-rpms. Repository for Real Time KVM (RT-KVM) for NFV; contains packages to enable the real time kernel. Enable this repository on all NFV Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository.

IBM POWER repositories

The following repositories are for Red Hat OpenStack Platform on IBM POWER (ppc64le) architecture. Use these repositories in place of their equivalents in the core repositories.

  • Red Hat Enterprise Linux 8 for IBM Power, little endian - BaseOS (RPMs): rhel-8-for-ppc64le-baseos-rpms. Base operating system repository for ppc64le systems.
  • Red Hat Enterprise Linux 8 for IBM Power, little endian - AppStream (RPMs): rhel-8-for-ppc64le-appstream-rpms. Contains Red Hat OpenStack Platform dependencies.
  • Red Hat Enterprise Linux 8 for IBM Power, little endian - High Availability (RPMs): rhel-8-for-ppc64le-highavailability-rpms. High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability.
  • Red Hat Ansible Engine 2.8 for RHEL 8 IBM Power, little endian (RPMs): ansible-2.8-for-rhel-8-ppc64le-rpms. Ansible Engine for Red Hat Enterprise Linux. Used to provide the latest version of Ansible.
  • Red Hat OpenStack Platform 16 Beta for RHEL 8 (RPMs): openstack-beta-for-rhel-8-ppc64le-rpms. Core Red Hat OpenStack Platform repository for ppc64le systems.

1.11. Product Support

Available resources include:

Customer Portal

The Red Hat Customer Portal offers a wide range of resources to help guide you through planning, deploying, and maintaining your Red Hat OpenStack Platform deployment. Facilities available via the Customer Portal include:

  • Product documentation
  • Knowledge base articles and solutions
  • Technical briefs
  • Support case management

Access the Customer Portal at https://access.redhat.com/.

Mailing Lists

Red Hat provides these public mailing lists that are relevant to Red Hat OpenStack Platform users:

  • The rhsa-announce mailing list provides notification of the release of security fixes for all Red Hat products, including Red Hat OpenStack Platform.

Subscribe at https://www.redhat.com/mailman/listinfo/rhsa-announce.

Beta Release Support Limits

Updates to the beta content on the Content Delivery Network (CDN) are made at the discretion of the OpenStack product team. There are no plans or guarantees for updates to the beta code on CDN. Also:

  • The beta code should not be used with production data or on production systems.
  • No guarantee of support is provided, but feedback and bug reports are welcome as are discussions with your account representative, partner contact, TAM, and so on.
  • Upgrades to or from a beta release are neither supported nor recommended.
  • No errata to the beta will be provided.

For more information, see the Red Hat OpenStack Platform 16.0 Beta Frequently Asked Questions (FAQ) at https://access.redhat.com/articles/4261791.

Chapter 2. Top New Features

This section provides an overview of the top new features in this release of Red Hat OpenStack Platform.

2.1. Compute

This section outlines the top new features for the Compute service.

  • SR-IOV live migration
  • Live migration of pinned instances
  • Bandwidth-aware scheduling
  • Colocation of pinned and floating instances on a single host
  • Multi-cell deployments using Cells V2

2.2. Networking

This section outlines the top new features for the Networking service.

Octavia OVN L4 LB driver
The Red Hat OpenStack Platform load balancing service (Octavia) now supports the layer 4 driver for Open Virtual Network (OVN).
OVN TLS/SSL support
Red Hat OpenStack Platform now supports the encryption of internal API traffic for OVN using Transport Layer Security (TLS).
OVN deployment over IPv6
Red Hat OpenStack Platform now supports deploying OVN on an IPv6 network.

2.3. Storage

This section outlines the top new features for the Storage service.

Deprecation of Data Processing service (sahara)
In Red Hat OpenStack Platform 16, the Data Processing service (sahara) is deprecated. In a future RHOSP release, the Data Processing service will be removed.
Simultaneously attach a volume to multiple instances
In Red Hat OpenStack Platform 16, if the back end driver supports it, you can now simultaneously attach a volume to multiple instances for both the Block Storage service (cinder) and the Compute service (nova). This feature addresses the use case for clustered application workloads that typically require active/active or active/standby scenarios.
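As a hedged sketch of this feature in use, the following openstack CLI sequence creates a multiattach-capable volume type and attaches one volume to two instances. The instance and volume names (vm-a, vm-b, shared-vol) are illustrative assumptions, and the back end must support multi-attach:

```shell
# Illustrative names; vm-a, vm-b, and shared-vol are assumptions, not defaults.
VOLUME="shared-vol"

# Only run against a real cloud with credentials loaded.
if command -v openstack >/dev/null 2>&1 && [ -n "${OS_AUTH_URL:-}" ]; then
  # A volume type whose extra spec marks it as multi-attach capable.
  openstack volume type create --property multiattach="<is> True" multiattach
  openstack volume create --size 10 --type multiattach "$VOLUME"
  # Attach the same volume to two different instances.
  openstack server add volume vm-a "$VOLUME"
  openstack server add volume vm-b "$VOLUME"
fi
```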
Image Service - Deploy glance-cache on far-edge nodes
This feature enables deploying Image service (glance) caching at the edge. This feature improves the boot time for instances and decreases the bandwidth usage between core and edge sites, which is useful in medium to large edge sites to avoid using compute nodes to fetch images from the core site.

2.4. Ceph Storage

This section outlines the top new features for Ceph Storage.

Red Hat Ceph Storage Upgrade
To maintain compatibility with Red Hat Enterprise Linux 8, Red Hat OpenStack Platform 16 Beta director deploys Red Hat Ceph Storage 4 Beta. You can use Red Hat OpenStack Platform 16 running on RHEL 8 to connect to a preexisting external Red Hat Ceph Storage 3 cluster running on RHEL 7.

2.5. Network Functions Virtualization

This section outlines the top new features for Network Functions Virtualization (NFV).

Live migration for instances with SR-IOV (Single root I/O virtualization)
Instances configured with SR-IOV interfaces can now be live migrated. For direct mode SR-IOV interfaces, this operation will incur some network downtime as the interface is detached and reattached as part of the migration. This is not an issue for indirect mode SR-IOV interfaces.

2.6. Technology Previews

This section outlines features that are in technology preview in Red Hat OpenStack Platform 16.

Note

For more information on the support scope for features marked as technology previews, see Technology Preview Features Support Scope.

2.6.1. New Technology Previews

Deploy and manage multiple overclouds from a single undercloud

This release includes the capability to deploy multiple overclouds from a single undercloud.

  • Interact with a single undercloud to manage multiple distinct overclouds.
  • Switch context on the undercloud to interact with different overclouds.
  • Reduce redundant management nodes.
Undercloud minion
This release contains the ability to install undercloud minions. An undercloud minion provides additional heat-engine and ironic-conductor services on a separate host. These additional services support the undercloud with orchestration and provisioning operations. The distribution of undercloud operations across multiple hosts provides more resources to run an overcloud deployment, which can result in potentially faster and larger deployments.
In-flight validations

Director provides a new set of commands to list validations and run validations against the undercloud and overcloud. These commands are:

  • openstack tripleo validator list
  • openstack tripleo validator run

These commands interact with a set of Ansible-based tests from the openstack-tripleo-validations package. To enable this feature, set the enable_validations parameter to true in the undercloud.conf file and run openstack undercloud install.
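For reference, enabling the validations in undercloud.conf might look like the following fragment; placing the parameter in the [DEFAULT] section is an assumption based on the usual undercloud.conf layout:

```ini
# undercloud.conf (fragment)
[DEFAULT]
enable_validations = true
```

After rerunning openstack undercloud install, the validator subcommands become available from the undercloud.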

New director feature to create an active-active configuration for Block Storage service

With Red Hat OpenStack Platform director, you can now deploy the Block Storage service (cinder) in an active-active configuration on Ceph RADOS Block Device (RBD) back ends only, if the back end driver supports this configuration.

The new cinder-volume-active-active.yaml file defines the active-active cluster name by assigning a value to the CinderVolumeCluster parameter. CinderVolumeCluster is a global Block Storage parameter, and prevents you from including clustered (active-active) and non-clustered back ends in the same deployment.

The cinder-volume-active-active.yaml file causes director to use the non-Pacemaker, cinder-volume Orchestration service template, and adds the etcd service to your Red Hat OpenStack Platform deployment as a distributed lock manager (DLM).
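As a sketch, a custom environment file might set the cluster name as follows; the cluster name cinder-cluster-1 is an illustrative assumption, and the cinder-volume-active-active.yaml environment file must also be included in the deploy command:

```yaml
# Custom environment file (fragment); include it alongside
# cinder-volume-active-active.yaml when running openstack overcloud deploy.
parameter_defaults:
  CinderVolumeCluster: cinder-cluster-1  # hypothetical cluster name
```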

New director parameter for configuring Block Storage service availability zones
With Red Hat OpenStack Platform director, you can now configure different availability zones for Block Storage service (cinder) volume back ends. Director has a new parameter, CinderXXXAvailabilityZone, where XXX is associated with a specific back end.
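For illustration, with a hypothetical RBD back end and a hypothetical NetApp back end, the per-back-end availability zone parameters might be set as follows; the zone names are examples only:

```yaml
parameter_defaults:
  CinderRbdAvailabilityZone: zone-storage-1     # example zone for an RBD back end
  CinderNetappAvailabilityZone: zone-storage-2  # example zone for a NetApp back end
```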
New Redfish BIOS management interface for Bare Metal service

Red Hat OpenStack Platform Bare Metal service (ironic) now has a BIOS management interface, with which you can inspect and modify a device’s BIOS configuration.

In Red Hat OpenStack Platform 16, the Bare Metal service supports BIOS management capabilities for data center devices that are Redfish API compliant. The Bare Metal service implements Redfish calls through the Python library, Sushy.

Using separate heat stacks

You can now use separate heat stacks for different types of nodes. For example, you can have a stack just for the control plane, a stack for Compute nodes, and another stack for Hyper Converged Infrastructure (HCI) nodes. This approach has the following benefits:

  • Management: You can modify and manage your nodes without needing to make changes to the control plane stack.
  • Scaling out: You do not need to update all nodes just to add more Compute or Storage nodes; the separate heat stack means that these operations are isolated to selected node types.
  • Edge sites: You can segment an edge site within its own heat stack, thereby reducing network and management dependencies on the central data center. The edge site must have its own Availability Zone for its Compute and Storage nodes.
Deploying multiple Ceph clusters
You can use director to deploy multiple Ceph clusters, either on nodes dedicated to running Ceph or hyper-converged, using separate heat stacks for each cluster. For edge sites, you can deploy a hyper-converged infrastructure (HCI) stack that uses Compute and Ceph storage on the same node. For example, you might deploy two edge stacks named HCI-01 and HCI-02, each in its own availability zone. As a result, each edge stack has its own Ceph cluster and Compute services.
New Compute (nova) configuration added to enable memoryBacking source type file, with shared access

A new Compute (nova) parameter is available, QemuMemoryBackingDir, which specifies the directory in which to store the memory backing file when a libvirt memoryBacking element is configured with source type="file" and access mode="shared".

Note: The memoryBacking element is available only in libvirt 4.0.0 and later, and QEMU 2.6.0 and later.
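A minimal sketch of setting this parameter in a custom environment file; the directory path shown is an assumption, not a default:

```yaml
parameter_defaults:
  # Hypothetical shared-memory-backed directory for the memory backing files.
  QemuMemoryBackingDir: /var/lib/libvirt/memory-backing
```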

Support added for templated cell mapping URLs

Director now provides cell mapping URLs for the database and message queue URLs as templates, by using variables to represent values such as the user name and password. The following properties, defined in the Compute configuration file of the director, specify the variable values:

  • database_connection: [database]/connection
  • transport_url: [DEFAULT]/transport_url
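For illustration, the templated form of such URLs might look like the following Compute configuration fragment. The host addresses are placeholders, and the braced variables are filled in at runtime from the properties listed above:

```ini
# nova.conf (fragment); the braces are template variables, not literal values.
[database]
connection = mysql+pymysql://{username}:{password}@192.0.2.10/nova_cell1

[DEFAULT]
transport_url = rabbit://{username}:{password}@192.0.2.11:5672/
```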
Support added for configuring the maximum number of disk devices that can be attached to a single instance

A new Compute (nova) parameter is available, max_disk_devices_to_attach, which specifies the maximum number of disk devices that can be attached to a single instance. The default is unlimited (-1). The following example illustrates how to change the value of max_disk_devices_to_attach to "30":

parameter_defaults:
  ComputeExtraConfig:
    nova::config::nova_config:
      '[compute]/max_disk_devices_to_attach':
        value: '"30"'
Fencing for the Redfish API
Fencing with Pacemaker is now available for hardware that supports the Redfish API.

Chapter 3. Release Information

These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform. Notes for updates released during the support lifecycle of this Red Hat OpenStack Platform release will appear in the advisory text associated with each update.

3.1. Red Hat OpenStack Platform 16 Beta

These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.

3.1.1. Enhancements

This release of Red Hat OpenStack Platform features the following enhancements:

BZ#1222414

With this enhancement, support for live migration of instances with a NUMA topology has been added. Previously, this action was disabled by default. It could be enabled using the '[workarounds] enable_numa_live_migration' config option, but this defaulted to False because live migrating such instances resulted in them being moved to the destination host without updating any of the underlying NUMA guest-to-host mappings or the resource usage. With the new NUMA-aware live migration feature, if the instance cannot fit on the destination, the live migration will be attempted on an alternate destination if the request is set up to have alternates. If the instance can fit on the destination, the NUMA guest-to-host mappings will be re-calculated to reflect its new host, and its resource usage updated.

BZ#1360970

With this enhancement, support for live migration of instances with attached SR-IOV-based neutron interfaces has been added. Neutron SR-IOV interfaces can be grouped into two categories: direct mode and indirect mode. Direct mode SR-IOV interfaces are attached directly to the guest and exposed to the guest OS. Indirect mode SR-IOV interfaces have a software interface, for example a macvtap, between the guest and the SR-IOV device. This feature enables transparent live migration for instances with indirect mode SR-IOV devices. Because there is no generic way to copy hardware state during a live migration, direct mode migration is not transparent to the guest: the migration mimics the workflow already in place for suspend and resume, detaching the direct mode interfaces before the migration and re-attaching them afterwards. As a result, instances with direct mode SR-IOV ports lose network connectivity during a migration unless a bond with a live-migratable interface is created within the guest.
Previously, it was not possible to live migrate instances with SR-IOV-based network interfaces. This was problematic because live migration is frequently used for host maintenance and similar actions. Instead, the instance had to be cold migrated, which involves downtime for the guest.
With this enhancement, instances with SR-IOV-based network interfaces can now be live migrated.

BZ#1653834

This enhancement adds the boolean parameter `NovaComputeEnableKsm`. The parameter enables the ksm and ksmtuned service on compute nodes. You can set `NovaComputeEnableKsm` for each Compute role. Default: `False`.
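A sketch of the corresponding environment file entry; the global form is shown here, and per-role variants would use the role-specific parameter mechanism:

```yaml
parameter_defaults:
  NovaComputeEnableKsm: true  # enables the ksm and ksmtuned services on Compute nodes
```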

BZ#1666973

This feature allows custom settings in sections other than [global] in ceph.conf.
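For example, assuming these settings are carried by the director CephConfigOverrides mechanism (an assumption; the option names below are illustrative), non-[global] sections might be expressed as:

```yaml
parameter_defaults:
  CephConfigOverrides:
    global:
      osd_pool_default_size: 3
    osd:
      osd_scrub_during_recovery: false  # example option in a non-global section
```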

BZ#1696663

This update allows you to configure NUMA affinity for most neutron networks. This helps you ensure that instances are placed on the same host NUMA node as the NIC providing external connectivity to the vSwitch. You can configure NUMA affinity on networks that use:

  • a 'provider:network_type' of 'flat' or 'vlan' and a 'provider:physical_network' (L2 networks), or
  • a 'provider:network_type' of 'vxlan', 'gre', or 'geneve' (L3 networks).
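A hedged sketch of the resulting Compute (nova) configuration for one physical network and for tunnelled networks; the option names follow the upstream NUMA-affinity convention, and physnet0 and the node numbers are examples:

```ini
# nova.conf (fragment)
[neutron]
physnets = physnet0

# L2 networks on physnet0 are affined to host NUMA node 0.
[neutron_physnet_physnet0]
numa_nodes = 0

# L3 (tunnelled) networks are affined to host NUMA node 1.
[neutron_tunnel]
numa_nodes = 1
```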

BZ#1696717

This update allows you to deploy Ganesha NFS with OSP director and configure it to use an external Ceph cluster that is not managed by director.

Manila can use Ganesha to serve NFS shares when its back end is Ceph. Before this update, when the Ceph cluster was not deployed and managed by OSP director, you also had to provision Ganesha externally.

Now you can deploy Ganesha NFS with OSP director to serve Manila shares even when the Ceph cluster itself is external and not managed by director.

3.1.2. Technology Preview

The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to https://access.redhat.com/support/offerings/techpreview/.

BZ#1288155

This feature is Tech Preview in RHOSP 16 beta.

BZ#1459187

This feature is Tech Preview in RHOSP 16 beta.

BZ#1474394

This feature is Tech Preview in RHOSP 16 beta.

BZ#1518222

This feature is Tech Preview in RHOSP 16 beta.

BZ#1545700

This feature is Tech Preview in RHOSP 16 beta.

BZ#1593828

This feature is Tech Preview in RHOSP 16 beta.

BZ#1600967

This feature is Tech Preview in RHOSP 16 beta.

BZ#1621701

This feature is Tech Preview in RHOSP 16 beta.

BZ#1622233

This feature is Tech Preview in RHOSP 16 beta.

BZ#1623152

This feature is Tech Preview in RHOSP 16 beta.

BZ#1626260

Feature:

This feature implements a new composable service, following the pattern used for the other Ceph components, so that director can automate deployment of the Ceph Dashboard.

Reason:

Because ceph-dashboard is available in Ceph, this feature implements a user scenario in which the dashboard is deployed along with the other Ceph components using director.

Result:

This feature allows operators to include a ceph-dashboard template that contains all the parameters and resources needed to enable the dashboard when the ceph-ansible playbook execution occurs (with OS::TripleO::Services::CephGrafana defined in the same way as OS::TripleO::Services::CephOSD).

BZ#1628061

This feature is Tech Preview in RHOSP 16 beta.

BZ#1700083

This feature is Tech Preview in RHOSP 16 beta.

BZ#1710089

Director has added the `openstack undercloud minion install` command that you can use to configure an additional host to augment the undercloud services.

BZ#1710092

Director now provides the ability to deploy an additional node that you can use to add heat-engine resources for deployment-related actions.

3.1.3. Known Issues

These known issues exist in Red Hat OpenStack Platform at this time:

BZ#1328124

As a result of the issue described in https://bugzilla.redhat.com/1775283 for OSP 16 beta, cell deployment fails with the following error message:

"<13>Nov 21 14:13:48 puppet-user: Error: Function lookup() did not find a value for the name 'ovn_dbs_vip'"

As a workaround for OSP 16 beta, create an export file:

(undercloud) $ openstack overcloud cell export cell1 -f -o /home/stack/cell1/cell-input.yaml

In the export file, move all information listed under AllNodesExtraMapData to GlobalConfigExtraMapData. As a result, the required input data is available on the cell hosts.

BZ#1647005

The nova-compute ironic driver tries to update a bare metal node while the node is being cleaned. The cleaning takes approximately five minutes, but nova-compute attempts to update the node for only approximately two minutes. After the timeout, nova-compute stops retrying and puts the nova instance into the ERROR state.

As a workaround, set the following configuration option for nova-compute service:

 [ironic]
 api_max_retries = 180

As a result, nova-compute continues attempting to update the bare metal node for longer and eventually succeeds.

BZ#1734301

Currently, the OVN load balancer does not open new connections when fetching data from members. The load balancer modifies the destination address and destination port and sends request packets to the members.
As a result, it is not possible to define an IPv6 member while using an IPv4 load balancer address and vice versa.
There is currently no workaround for this issue.

BZ#1777898

Currently, live migration fails if the Internal API hostname reference is missing from the generated ssh_known_hosts file on the compute hosts, which is created by the tripleo-ssh-known-hosts role. The role runs only once, and if the first node in the inventory is a CephStorage node, the check for role_networks misses the Internal API network.
As a workaround, add the Internal API network to the CephStorage role in the roles file used for the deployment, even if the Ceph Storage node does not use the Internal API network.

[root@undercloud-0 openstack-tripleo-heat-templates]# git diff
diff --git a/roles_data.yaml b/roles_data.yaml
index fc9dcea..e4ba3ed 100644
--- a/roles_data.yaml
+++ b/roles_data.yaml
@@ -371,6 +371,8 @@
   description: |
     Ceph OSD Storage node role
   networks:
+    InternalApi:
+      subnet: internal_api_subnet
     Storage:
       subnet: storage_subnet
     StorageMgmt:

BZ#1783044

The ceph-ansible package is not available, so director cannot configure the overcloud to use Ceph.

Workaround: Install ceph-ansible on the undercloud as described in https://access.redhat.com/solutions/4654631.

3.1.4. Removed Functionality

BZ#1631508

In Red Hat OpenStack version 16, the controller-v6.yaml file is removed. The routes that were defined in controller-v6.yaml are now defined in controller.yaml. (The controller.yaml file is a NIC configuration file that is rendered from values set in roles_data.yaml.)

Previous versions of Red Hat OpenStack Platform director included two routes: one for IPv6 on the External network (default) and one for IPv4 on the Control Plane.

To use both default routes, make sure that the controller definition in roles_data.yaml contains both networks in default_route_networks (for example, default_route_networks: ['External', 'ControlPlane']).

BZ#1712981

The Data Processing service (sahara) is deprecated in RHOSP 15 and removed in RHOSP 16. Red Hat continues to offer support for the Data Processing service in RHOSP versions 13 and 15.