Chapter 3. Release Information

These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.
Notes for updates released during the support lifecycle of this Red Hat OpenStack Platform release will appear in the advisory text associated with each update.

3.1. Red Hat OpenStack Platform 12 GA

These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.

3.1.1. Enhancements

This release of Red Hat OpenStack Platform features the following enhancements:
BZ#1117883
This update provides the Docker image for the Keystone service.
BZ#1276147
This update adds support to OpenStack Bare Metal (ironic) for the Emulex hardware iSCSI (be2iscsi) ramdisk.
BZ#1277652
This update adds new commands that allow you to determine host-to-IP mapping from the undercloud without needing to access the hosts directly.

You can show which IP addresses are assigned to which host and to which port with the following command: openstack stack output show overcloud HostsEntry -c output_value -f value

Use grep to filter the results for a specific host.
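
For example (the host name below is illustrative):

openstack stack output show overcloud HostsEntry -c output_value -f value | grep overcloud-controller-0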

You can also map the hosts to bare metal nodes with the following command: openstack baremetal node list --fields uuid name instance_info -f json
BZ#1293435
Uploading to and downloading from Cinder volumes with Glance is now supported with the Cinder backend driver.

Note: This update does not include support for Ceph RBD. Use the Ceph backend driver to perform RBD operations on Ceph volumes.
BZ#1301549
This update adds a new validation to check the overcloud's network environment. This helps avoid any conflicts with IP addresses, VLANs, and allocation pools when deploying your overcloud.
BZ#1334545
You can now set QoS IOPS limits that scale with the volume size in GB, using the options "total_iops_sec_per_gb", "read_iops_sec_per_gb", and "write_iops_sec_per_gb".

For example, if you set the total_iops_sec_per_gb=1000 option, you will get 1000 IOPS for a 1GB volume, 2000 IOPS for a 2GB volume, and so on.
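
For example, you could create a QoS specification with these properties and associate it with a volume type (a minimal sketch; the specification and volume type names are illustrative):

openstack volume qos create --consumer front-end --property total_iops_sec_per_gb=1000 scaled-iops
openstack volume qos associate scaled-iops my-volume-type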
BZ#1368512
This update adds a new validation that checks the hardware resources on the undercloud before a deployment or upgrade. The validation ensures the undercloud meets the necessary disk space and memory requirements.
BZ#1383576
This update adds an action to "Manage Nodes" through the director UI. This action switches nodes to a "manageable" state so the director can perform introspection through the UI.
BZ#1406102
Director now supports the creation of custom networks during the deployment and update phases. These additional networks can be used for dedicated network controllers, Ironic baremetal nodes, system management, or to create separate networks for different roles.

A single data file ('network_data.yaml') manages the list of networks that will be deployed. The role definition process then assigns the networks to the required roles.
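
For example, an entry in 'network_data.yaml' might look similar to the following (a hedged sketch; the network name, subnet, and allocation pool values are illustrative):

----
- name: Management
  vip: false
  name_lower: management
  ip_subnet: '10.0.1.0/24'
  allocation_pools: [{'start': '10.0.1.4', 'end': '10.0.1.250'}]
----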
BZ#1430885
This update increases the granularity of the deployment progress bar by increasing the nesting level used to retrieve the stack resources. This provides a more accurate view of deployment progress.
BZ#1434929
Previously, the OS_IMAGE_API_VERSION and the OS_VOLUME_API_VERSION environment variables were not set, which forced Glance and Cinder to fall back to the default API versions. For Cinder, this was the older v2 API.

With this update, the overcloudrc file now sets the environment variables to specify the API versions for Glance and Cinder.
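
For example, the overcloudrc file now contains entries similar to the following (the exact versions depend on your environment):

export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=3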

3.1.2. Technology Preview

The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to https://access.redhat.com/support/offerings/techpreview/.
BZ#1300425
With the Manila service, you can now create shares within Consistency Groups to guarantee snapshot consistency across multiple shares. Driver vendors must report this capability and implement its functions for it to work with their back end.

This feature is not recommended for production cloud environments, as it is still in its experimental stage.
BZ#1418433
Containerized deployment of the OpenStack File Share Service (manila) is available as a technology preview in this release. By default, Manila, Cinder, and Neutron will still be deployed on bare metal machines.
BZ#1513109
POWER-8 (ppc64le) Compute support is now available as a technology preview.

3.1.3. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat OpenStack Platform. You must take this information into account to ensure the best possible outcomes for your deployment.
BZ#1463355
When TLS everywhere is enabled, the HAProxy stats interface will also use TLS. As a result, you will need to access the interface through the individual node's ctlplane address, which is either the actual IP address or the FQDN (using the convention <node name>.ctlplane.<domain>, for example, overcloud-controller-0.ctlplane.example.com). This setting can be configured with the `CloudNameCtlplane` parameter in `tripleo-heat-templates`. Note that you can still use the `haproxy_stats_certificate` parameter from the HAProxy class, and it will take precedence if set.
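
For example, you could set this parameter in an environment file (a minimal sketch; the domain name is illustrative):

----
parameter_defaults:
  CloudNameCtlplane: overcloud.ctlplane.example.com
----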

3.1.4. Known Issues

These known issues exist in Red Hat OpenStack Platform at this time:
BZ#1552234
There is currently a known issue where you cannot use ACLs to make a container public for anonymous access. This issue arises when sending `POST` operations to Swift that specify a '*' value in the `X-Container-Read` or `X-Container-Write` settings.
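
For example, an operation of this kind, shown here with the swift command-line client (the container name is illustrative), is affected by this issue:

swift post --read-acl '*' my-container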
BZ#1321179
OpenStack command-line clients that use `python-requests` cannot currently validate certificates that have an IP address in the SAN field.
BZ#1384845
When an overcloud image is shipped with a 'tuned' version lower than 2.7.1-4, you should apply a manual update of the 'tuned' package to the overcloud image. If the 'tuned' version is 2.7.1-4 or higher, provide the list of isolated cores to 'tuned' and activate the profile, for example:

# echo "isolated_cores=2,4,6,8,10,12,14,18,20,22,24,26,28,30" >> /etc/tuned/cpu-partitioning-variables.conf
# tuned-adm profile cpu-partitioning

This is a known issue until the 'tuned' packages are available in the CentOS repositories.
BZ#1385347
The '--controller-count' option for the 'openstack overcloud deploy' command sets the 'NeutronDhcpAgentsPerNetwork' parameter. When deploying a custom Networker role that hosts the OpenStack Networking (neutron) DHCP agent, the 'NeutronDhcpAgentsPerNetwork' parameter might not be set to the correct value. As a workaround, set the 'NeutronDhcpAgentsPerNetwork' parameter manually using an environment file. For example:

----
parameter_defaults:
  NeutronDhcpAgentsPerNetwork: 3
----

This sets 'NeutronDhcpAgentsPerNetwork' to the correct value.
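
You can then include the environment file in your deployment command, for example (the file name is illustrative):

openstack overcloud deploy --templates -e ~/templates/neutron-dhcp-agents.yaml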
BZ#1486995
When using an NFS back end for the Image service (glance), attempting to create an image will fail with a permission error. This is because the user IDs on the host and in the container differ, and also because Puppet cannot successfully mount the NFS endpoint in the container.
BZ#1487920
Encrypted volumes cannot attach correctly to instances in containerized environments. The Compute service runs "cryptsetup luksOpen", which waits for the udev device creation process to finish. This process does not actually finish, which causes the command to hang.

Workaround: Restart the containerized Compute service with the docker option "--ipc=host".
BZ#1508438
For containerized OpenStack services, configuration files are now installed in each container. However, some OpenStack services are not containerized yet, and configuration files for those services are still installed on the bare metal nodes. 

If you need to access or modify configuration files for containerized services, use /var/lib/config-data/<container name>/<config path> on the host. For services that are not containerized yet, use /etc/<service>.
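
For example, the Identity service (keystone) configuration used by its container would be found under a path similar to the following (an illustrative path):

/var/lib/config-data/keystone/etc/keystone/keystone.conf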
BZ#1516911
On HP DL360/DL380 Gen9 servers, the DIMM format does not match the regex query used by the validation.

To pass this check, you must cherry-pick the hardware patches referenced in comment #2 of the Bugzilla ticket.
BZ#1519057
There is currently a known issue with LDAP integration for Red Hat OpenStack Platform. At present, the `keystone_domain_config` tag is missing from `keystone.yaml`, preventing Puppet from properly applying the required configuration files. Consequently, LDAP integration with Red Hat OpenStack Platform will not be properly configured. As a workaround, you will need to manually edit `keystone.yaml` and add the missing tag. There are two ways to do this:

1. Edit the file directly:
  a. Log into the undercloud as the stack user.
  b. Open keystone.yaml in the editor of your choice. For example:
       `sudo vi /usr/share/openstack-tripleo-heat-templates/docker/services/keystone.yaml`
  c. Append the missing puppet tag, `keystone_domain_config`, to line 94. For example:
      `puppet_tags: keystone_config`
        Changes to:
      `puppet_tags: keystone_config,keystone_domain_config`
  d. Save and close `keystone.yaml`.
  e. Verify you see the missing tag in the `keystone.yaml` file. The following command should return '1':
    `cat /usr/share/openstack-tripleo-heat-templates/docker/services/keystone.yaml | grep 'puppet_tags: keystone_config,keystone_domain_config' | wc -l`

2. Or, use sed to edit the file inline:
  a. Log in to the undercloud as the stack user.
  b. Run the following command to add the missing puppet tag:
     `sed -i 's/puppet_tags\: keystone_config/puppet_tags\: keystone_config,keystone_domain_config/' /usr/share/openstack-tripleo-heat-templates/docker/services/keystone.yaml`
  c. Verify you see the missing tag in the `keystone.yaml` file. The following command should return '1':
    `cat /usr/share/openstack-tripleo-heat-templates/docker/services/keystone.yaml | grep 'puppet_tags: keystone_config,keystone_domain_config' | wc -l`
BZ#1519536
You must manually discover the latest Docker image tags for current container images that are stored in Red Hat Satellite. For more information, see the Red Hat Satellite documentation: https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_container_images#managing_container_images_with_docker_tags
BZ#1520004
It is only possible to deploy Ceph storage servers if their disk devices are homogeneous.
BZ#1522872
OpenStack Compute (nova) provides both versioned and unversioned notifications in RabbitMQ. However, due to the lack of consumers for versioned notifications, the versioned notifications queue grows quickly and causes RabbitMQ failures. This can hinder Compute operations such as instance creation and flavor creation. Red Hat is currently implementing fixes for RabbitMQ and director:

https://bugzilla.redhat.com/show_bug.cgi?id=1478274
https://bugzilla.redhat.com/show_bug.cgi?id=1488499

The following article provides a workaround until Red Hat releases patches for this issue:

https://access.redhat.com/solutions/3139721
BZ#1525520
For deployments using OVN as the ML2 mechanism driver, only nodes with connectivity to the external networks are eligible to schedule the router gateway ports. However, there is currently a known issue that makes all nodes eligible, which becomes a problem when the Compute nodes do not have external connectivity. As a result, if a router gateway port is scheduled on a Compute node without external connectivity, ingress and egress connections for the external networks will not work; in this case, the router gateway port has to be rescheduled to a Controller node. As a workaround, you can provide external connectivity on all your Compute nodes, delete NeutronBridgeMappings, or set it to datacentre:br-ex. For more information, see https://bugzilla.redhat.com/show_bug.cgi?id=1525520 and https://bugzilla.redhat.com/show_bug.cgi?id=1510879.
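
For example, a minimal sketch of setting this parameter in an environment file:

----
parameter_defaults:
  NeutronBridgeMappings: datacentre:br-ex
----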

3.1.5. Deprecated Functionality

The items in this section are either no longer supported, or will no longer be supported in a future release.
BZ#1417221
The Panko service is officially deprecated in Red Hat OpenStack Platform 12. Support for Panko will be limited to usage from CloudForms only. We do not recommend using Panko outside of the CloudForms use case.
BZ#1427719
VPN-as-a-Service (VPNaaS) was deprecated in Red Hat OpenStack Platform 11, and has now been removed in Red Hat OpenStack Platform 12.
BZ#1489801
MongoDB is no longer used by Red Hat OpenStack Platform. Previously, it was used for Telemetry (which now uses Gnocchi) and Zaqar on the undercloud (which is moving to Redis). As a result, 'mongodb', 'puppet-mongodb', and 'v8' are no longer included.
BZ#1510716
File injection from the Compute REST API is deprecated. It will continue to be supported for now if you use API microversion < 2.56; however, Compute will eventually remove this functionality. The changes are as follows:
                      
* Deprecate the 'personality' parameter from the 'POST /servers' create server API and the 'POST /servers/{server_id}/action' rebuild server API. Specifying the 'personality' parameter in the request body to either of these APIs will result in a '400 Bad Request' error response. 

* Add support to pass 'user_data' to the rebuild server API as a result of this change.

* Stop returning 'maxPersonality' and 'maxPersonalitySize' response values from the 'GET /limits' API.

* Stop accepting and returning 'injected_files', 'injected_file_path_bytes', 'injected_file_content_bytes' from the 'os-quota-sets' and 'os-quota-class-sets' APIs.

* Remove Compute API extensions, including server extensions, flavor extensions, and image extensions. The extension code has its own policies and there is no option to enable or disable these extensions in the API, leading to interoperability issues.

* Remove the 'hide_server_address_states' configuration option, which allows you to configure the server states in which the address is hidden, along with the hide server address policy. Also remove the 'os_compute_api:os-hide-server-addresses' policy, as it is no longer necessary.