Chapter 3. Release Information

These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.
Notes for updates released during the support lifecycle of this Red Hat OpenStack Platform release will appear in the advisory text associated with each update.

3.1. Red Hat OpenStack Platform 10 GA

These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.

3.1.1. Enhancements

This release of Red Hat OpenStack Platform features the following enhancements:
BZ#1188175
This enhancement adds support for virtual device role tagging. This was added because an instance's operating system may need extra information about the virtual devices it is running on. For example, in an instance with multiple virtual network interfaces, the guest operating system needs to distinguish between them by intended usage in order to provision them accordingly.
With this update, virtual device role tagging allows users to tag virtual devices when creating an instance. Those tags are then presented to the instance (along with other device metadata) using the metadata API, and through the config drive (if enabled). For more information, see the chapter `Use Tagging for Virtual Device Identification` in the Red Hat OpenStack Platform 10 Networking Guide: https://access.redhat.com/documentation/en/red-hat-openstack-platform/
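For example, a minimal sketch of tagging devices at boot time with the `nova` client (the image, flavor, IDs, and tag names are illustrative, and the client must support device tagging, introduced with compute API microversion 2.32):

  $ nova boot --image rhel7 --flavor m1.small \
      --nic net-id=<NETWORK_UUID>,tag=data-plane \
      --block-device source=volume,id=<VOLUME_UUID>,dest=volume,tag=database \
      tagged-instance

The tags then appear alongside the other device metadata presented to the instance.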
BZ#1189551
This update adds the `real time` feature, which provides stronger guarantees for worst-case scheduler latency for vCPUs. This update assists tenants that need to run workloads concerned with CPU execution latency, and that require the guarantees offered by a real time KVM guest configuration.
BZ#1198602
This enhancement allows the `admin` user to view a list of the floating IPs allocated to instances, using the admin console. This list spans all projects in the deployment.
Previously, this information was only available from the command-line.
BZ#1233920
This enhancement adds support for virtual device role tagging. This was added because an instance's operating system may need extra information about the virtual devices it is running on. For example, in an instance with multiple virtual network interfaces, the guest operating system needs to distinguish between them by intended usage in order to provision them accordingly.
With this update, virtual device role tagging allows users to tag virtual devices when creating an instance. Those tags are then presented to the instance (along with other device metadata) using the metadata API, and through the config drive (if enabled). For more information, see the chapter `Use Tagging for Virtual Device Identification` in the Red Hat OpenStack Platform 10 Networking Guide: https://access.redhat.com/documentation/en/red-hat-openstack-platform/
BZ#1249836
With the 'openstack baremetal' utility, you can now specify specific images during boot configuration. Specifically, you can now use the '--deploy-kernel' and '--deploy-ramdisk' options to specify a kernel or ramdisk image, respectively.
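For example, a sketch using the default deploy image names created by `openstack overcloud image upload` (the image names may differ in your environment):

  $ openstack baremetal configure boot \
      --deploy-kernel bm-deploy-kernel \
      --deploy-ramdisk bm-deploy-ramdisk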
BZ#1256850
The Telemetry API (ceilometer-api) now uses apache-wsgi instead of eventlet. When upgrading to this release, ceilometer-api will be migrated accordingly.

This change provides greater flexibility for per-deployment performance and scaling adjustments, as well as straightforward use of SSL.
BZ#1262070
You can now use the director to configure Ceph RBD as a Block Storage backup target. This will allow you to deploy an overcloud where volumes are set to back up to a Ceph target. By default, volume backups will be stored in a Ceph pool called 'backups'.

Backup settings are configured in the following environment file (on the undercloud):

/usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml
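
For example, a sketch of including the backup settings in a deployment (all other required deployment options omitted):

  $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml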
BZ#1279554
Using the RBD backend driver (Ceph Storage) for OpenStack Compute (nova) ephemeral disks applies two additional settings to libvirt:

  hw_disk_discard: unmap
  disk_cachemodes: network=writeback

This allows reclaiming of unused blocks on the Ceph pool and caching of network writes, which improves the performance for OpenStack Compute ephemeral disks using the RBD driver.

Also see http://docs.ceph.com/docs/master/rbd/rbd-openstack/
BZ#1283336
Previously, in Red Hat Enterprise Linux OpenStack Platform 7, the networks that could be used on each role were fixed. Consequently, it was not possible to have a custom network topology with any network on any role.
With this update, in Red Hat OpenStack Platform 8 and higher, any network may be assigned to any role.
As a result, custom network topologies are now possible, but the ports for each role will have to be customized. Review the `environments/network-isolation.yaml` file in `openstack-tripleo-heat-templates` to see how to enable ports for each role in a custom environment file or in `network-environment.yaml`.
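For example, a sketch of a custom environment file that enables the External network port on Compute nodes (by default this port is a noop; the path follows the pattern used in `environments/network-isolation.yaml`):

  resource_registry:
    OS::TripleO::Compute::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml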
BZ#1289502
This release adds support for two-factor authentication, providing better security for the reseller use case.
BZ#1290251
With this update, a new feature enables connecting the overcloud to a monitoring infrastructure by deploying availability monitoring agents (sensu-client) on the overcloud nodes.

To enable the monitoring agents deployment, use the environment file '/usr/share/openstack-tripleo-heat-templates/environments/monitoring-environment.yaml' and fill in the following parameters in the configuration YAML file:

MonitoringRabbitHost: host where the RabbitMQ instance for monitoring purposes is running
MonitoringRabbitPort: port on which the RabbitMQ instance for monitoring purposes is listening
MonitoringRabbitUserName: username used to connect to the RabbitMQ instance
MonitoringRabbitPassword: password used to connect to the RabbitMQ instance
MonitoringRabbitVhost: RabbitMQ vhost used for monitoring purposes
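
For example, a sketch of these parameters in an environment file (the host, credentials, and vhost values are placeholders):

  parameter_defaults:
    MonitoringRabbitHost: 192.168.100.10
    MonitoringRabbitPort: 5672
    MonitoringRabbitUserName: sensu
    MonitoringRabbitPassword: <PASSWORD>
    MonitoringRabbitVhost: /sensu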
BZ#1309460
You can now use the director to deploy Ceph RadosGW as your object storage gateway. To do so, include /usr/share/openstack-tripleo-heat-templates/environments/ceph-radosgw.yaml in your overcloud deployment. When you use this heat template, the default Object Storage service (swift) will not be deployed.
BZ#1314080
With this enhancement, `heat-manage` now supports a `heat-manage reset_stack_status` subcommand. This was added to manage situations where `heat-engine` was unable to contact the database, causing any stacks that were in-progress to remain stuck due to outdated database information. When this occurred, administrators needed a way to reset the status to allow these stacks to be updated again.
As a result, administrators can now use the `heat-manage reset_stack_status` command to reset a stuck stack.
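For example (the stack ID is a placeholder):

  $ heat-manage reset_stack_status <STACK_ID>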
BZ#1317669
This update includes a release file to identify the overcloud version deployed with OSP director. This gives a clear indication of the installed version and aids debugging. The overcloud-full image includes a new package (rhosp-release). Upgrades from older versions also install this RPM. All versions starting with OSP 10 will now have a release file. This only applies to Red Hat OpenStack Platform director-based installations. However, users can manually install the rhosp-release package and achieve the same result.
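For example, a minimal sketch on a manually installed node (assuming the package places its release file at /etc/rhosp-release):

  $ sudo yum install rhosp-release
  $ cat /etc/rhosp-release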
BZ#1325680
Typically, the installation and configuration of OVS+DPDK in OpenStack is performed manually after overcloud deployment. This can be very challenging for the operator and tedious to do over a large number of Compute nodes. The installation of OVS+DPDK has now been automated in TripleO. Identification of the hardware capabilities for DPDK was previously done manually, and is now automated during introspection. This hardware detection also provides the operator with the data needed for configuring Heat templates. At present, Compute nodes with DPDK-enabled hardware cannot coexist with Compute nodes without DPDK-enabled hardware.
The `ironic` Python Agent discovers the following hardware details and stores them in a swift blob:
* CPU flags for hugepages support - if `pse` exists, then 2MB hugepages are supported; if `pdpe1gb` exists, then 1GB hugepages are supported
* CPU flags for IOMMU - if VT-d/svm exists, then IOMMU is supported, provided IOMMU support is enabled in the BIOS
* Compatible NICs - compared with the list of NICs whitelisted for DPDK, as listed at http://dpdk.org/doc/nics

Nodes without any of the above-mentioned capabilities cannot be used for the Compute role with DPDK.

* The operator can enable DPDK on Compute nodes.
* The overcloud image for the nodes identified as Compute-capable and having DPDK NICs will have the OVS+DPDK package instead of OVS. It will also have the `dpdk` and `driverctl` packages.
* The device names of the DPDK-capable NICs will be obtained from T-H-T. The PCI address of each DPDK NIC needs to be identified from its device name; it is required for whitelisting the DPDK NICs during the PCI probe.
* Hugepages need to be enabled on the Compute nodes with DPDK.
* CPU isolation needs to be configured so that the CPU cores reserved for the DPDK Poll Mode Drivers (PMD) are not used by the general kernel balancing, interrupt handling, and scheduling algorithms.
* On each Compute node with a DPDK-enabled NIC, puppet will configure the DPDK_OPTIONS for whitelisted NICs, the CPU mask, and the number of memory channels for the DPDK PMD. The DPDK_OPTIONS needs to be set in /etc/sysconfig/openvswitch.

`Os-net-config` performs the following steps (see the sketch after this list):
* Associates the given interfaces with the DPDK drivers (vfio-pci by default) by identifying the PCI address of each interface. driverctl is used to bind the driver persistently.
* Understands the ovs_user_bridge and ovs_dpdk_port types and configures the ifcfg scripts accordingly.
* The type ovs_user_bridge translates to the OVS type OVSUserBridge; based on this, OVS configures the datapath type as `netdev`.
* The type ovs_dpdk_port translates to the OVS type OVSDPDKPort; based on this, OVS adds the port to the bridge with the interface type `dpdk`.
* Understands the ovs_dpdk_bond type and configures the ifcfg scripts accordingly.
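As a sketch, a DPDK bridge and port might be expressed in an os-net-config network configuration as follows (the bridge, port, and interface names are illustrative):

  network_config:
    - type: ovs_user_bridge
      name: br-link
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          members:
            - type: interface
              name: nic3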

On each Compute node with a DPDK-enabled NIC, puppet will perform the following steps:
* Enable OVS+DPDK in /etc/neutron/plugins/ml2/openvswitch_agent.ini:

  [OVS]
  datapath_type=netdev
  vhostuser_socket_dir=/var/run/openvswitch

* Configure vhostuser ports in /var/run/openvswitch to be owned by qemu.

On each controller node, puppet will perform the following steps:
* Add NUMATopologyFilter to scheduler_default_filters in nova.conf.

As a result, the automation of the above-mentioned enhanced platform awareness has been completed and verified by QA testing.
BZ#1325682
With this update, IP traffic can be managed by DSCP marking rules attached to QoS policies, which are in turn applied to networks and ports.
This was added because different sources of traffic may require different levels of prioritization at the network level, especially when dealing with real-time information or critical control data. As a result, the traffic from specific ports and networks can be marked with DSCP flags. Note that only Open vSwitch is supported in this release.
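For example, a sketch using the Newton-era neutron client (the policy name, network name, and DSCP mark are illustrative):

  $ neutron qos-policy-create dscp-marking
  $ neutron qos-dscp-marking-rule-create dscp-marking --dscp-mark 26
  $ neutron net-update private --qos-policy dscp-marking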
BZ#1328830
This update adds support for multiple theme configurations. This was added to allow a user to change a theme dynamically, using the front end. Some use-cases include the ability to toggle between a light and dark theme, or the ability to turn on a high contrast theme for accessibility reasons.
As a result, users can now choose a theme at run time.
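For example, a sketch of enabling multiple themes in the horizon settings file (the theme tuples follow the upstream defaults; paths may differ in your installation):

  AVAILABLE_THEMES = [
      ('default', 'Default', 'themes/default'),
      ('material', 'Material', 'themes/material'),
  ]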
BZ#1337782
This release now features Composable Roles. TripleO can now be deployed in a composable way, allowing customers to select what services should run on each node. This, in turn, allows support for more complex use-cases.
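For example, a sketch of a role definition in a `roles_data.yaml` file (the service list is truncated for brevity):

  - name: Controller
    CountDefault: 1
    ServicesDefault:
      - OS::TripleO::Services::Keystone
      - OS::TripleO::Services::GlanceApi
      - OS::TripleO::Services::NeutronServer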
BZ#1337783
Generic nodes can now be deployed during the hardware provisioning phase. These nodes are deployed with a generic operating system (namely, Red Hat Enterprise Linux); customers can then deploy additional services directly on these nodes.
BZ#1343130
The package that contains the ironic-python-agent image required the rhosp-director-images RPM as a dependency. However, you can use the ironic-python-agent image for general OpenStack Bare Metal (ironic) usage outside of the Red Hat OpenStack Platform director. This update changes the dependencies so that:

- The rhosp-director-images RPM requires the rhosp-director-images-ipa RPM
- The rhosp-director-images-ipa RPM does not require the rhosp-director-images RPM

Users now can install the ironic-python-agent image separately.
BZ#1346401
It is now possible to confine 'ceph-osd' instances with SELinux policies. In OSP10, new deployments have SELinux configured in 'enforcing' mode on the Ceph Storage nodes.
BZ#1347371
With this enhancement, RabbitMQ introduces the new HA feature of Queue Master distribution. One of the strategies is `min-masters`, which picks the node hosting the minimum number of masters.
This was added because a controller may become unavailable, in which case the Queue Masters for queues declared during the outage are located on the remaining available controllers. Once the lost controller becomes available again, masters of newly-declared queues are not placed with priority on the controller with the lower number of queue masters, so the distribution can become unbalanced, with one of the controllers under significantly higher load in the event of multiple fail-overs.
As a result, this enhancement spreads the queues out across controllers after a controller fail-over.
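For example, a sketch of selecting this strategy through a RabbitMQ policy (the policy name and queue pattern are illustrative):

  $ rabbitmqctl set_policy ha-queues '^(?!amq\.).*' \
      '{"ha-mode": "all", "queue-master-locator": "min-masters"}'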
BZ#1353796
With this update, you can now add nodes manually using the UI.
BZ#1359192
With this update, the overcloud image includes Red Hat Ceph Storage 2.0.
BZ#1366721
The Telemetry service (ceilometer) now uses gnocchi as its default meter dispatcher back end. Gnocchi is more scalable, and is more aligned with the future direction of the Telemetry service.
BZ#1367678
This enhancement adds `NeutronOVSFirewallDriver`, a new parameter for configuring the Open vSwitch (OVS) firewall driver in Red Hat OpenStack Platform director.
This was added because the neutron OVS agent supports a new mechanism for implementing security groups: the 'openvswitch' firewall. `NeutronOVSFirewallDriver` allows users to directly control which implementation is used:
`hybrid` - configures neutron to use the old iptables/hybrid-based implementation.
`openvswitch` - enables the new flow-based implementation.
The new firewall driver offers higher performance and reduces the number of interfaces and bridges used to connect guests to the project network. As a result, users can more easily evaluate the new security group implementation.
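For example, a sketch of an environment file that selects the new driver:

  parameter_defaults:
    NeutronOVSFirewallDriver: openvswitch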
BZ#1368218
With this update, you can now configure Object Storage service (swift) with additional raw disks by deploying the overcloud with an additional environment file, for example: 

  parameter_defaults:
    ExtraConfig:
      SwiftRawDisks:
        sdb:
          byte_size: 2048
          mnt_base_dir: /src/sdb
        sdc:
          byte_size: 2048

As a result, the Object Storage service is not limited by the local node `root` filesystem.
BZ#1371649
This enhancement updates the main script in `sahara-image-elements` to only allow the creation of images for supported plugins. For example, you can use the following command to create a CDH 5.7 image using Red Hat Enterprise Linux 7:
----
>> ./diskimage-create/diskimage-create.sh -p cloudera -v 5.7

Usage: diskimage-create.sh
         [-p cloudera|mapr|ambari]
         [-v 5.5|5.7|2.3|2.4]
         [-r 5.1.0]
----
BZ#1381628
As described in https://bugs.launchpad.net/tripleo/+bug/1630247, the Sahara service in upstream Newton TripleO is now disabled by default. As part of the upgrade procedure from Red Hat OpenStack Platform 9 to Red Hat OpenStack Platform 10, the Sahara services are enabled and retained by default. If the operator decides they do not want Sahara after the upgrade, they need to include the provided `-e 'major-upgrade-remove-sahara.yaml'` environment file as part of the deployment command for the controller upgrade and converge steps. Note: this environment file must be specified last, especially for the converge step, but it can be specified for both steps to avoid confusion. In this case, the Sahara services would not be restarted after the major upgrade.
This approach allows Sahara services to be properly handled during the OSP9 to OSP10 upgrade. As a result, Sahara services are retained as part of the upgrade. In addition, the operator can still explicitly disable Sahara, if necessary.
BZ#1383779
You can now use node-specific hiera to deploy Ceph storage nodes which do not have the same list of block devices. As a result, you can use node-specific hiera entries within the overcloud deployment's Heat templates to deploy non-similar OSD servers.
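For example, a heavily hedged sketch of a node-specific entry (assuming the `NodeDataLookup` parameter, which takes a JSON map keyed by each node's hardware UUID; the UUID and disk list are illustrative):

  parameter_defaults:
    NodeDataLookup: |
      {"32E87B4C-C4A7-41BE-865B-191684A6883B":
        {"ceph::profile::params::osds": {"/dev/sdb": {}, "/dev/sdc": {}}}}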

3.1.2. Technology Preview

The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to https://access.redhat.com/support/offerings/techpreview/.
BZ#1381227
This update contains the necessary components for testing the use of containers in OpenStack. This feature is available in this release as a Technology Preview.

3.1.3. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat OpenStack Platform. You must take this information into account to ensure the best possible outcomes for your deployment.
BZ#1377763
With Gnocchi 2.2, job dispatch is coordinated between controllers using Redis. As a result, you can expect improved processing of Telemetry measures.
BZ#1385368
To accommodate composable services, NFS mounts used as an Image Service (glance) back end are no longer managed by Pacemaker. As a result, the glance NFS back end parameter interface has changed: the new method is to use an environment file to enable the glance NFS back end. For example:
----
parameter_defaults:
  GlanceBackend: file
  GlanceNfsEnabled: true
  GlanceNfsShare: IP:/some/exported/path
----
Note: the GlanceNfsShare setting will vary depending on your deployment.
In addition, mount options can be customized using the `GlanceNfsOptions` parameter. If the Glance NFS backend was previously used in Red Hat OpenStack Platform 9, the environment file contents must be updated to match the Red Hat OpenStack Platform 10 format.

3.1.4. Known Issues

These known issues exist in Red Hat OpenStack Platform at this time:
BZ#1204259
Glance is not configured with glance.store.http.Store as a known store in /etc/glance/glance-api.conf. This means the glance client cannot create images with the --copy-from argument; these commands fail with a "400 Bad Request" error. As a workaround, edit /etc/glance/glance-api.conf, add glance.store.http.Store to the list in the "stores" configuration option, then restart the openstack-glance-api service. This enables successful creation of glance images with the --copy-from argument.
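For example, a sketch of the resulting option (the filesystem store is shown as an illustrative existing entry; the section and store names may differ by release):

  [glance_store]
  stores = glance.store.filesystem.Store,glance.store.http.Store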
BZ#1239130
The director does not provide network validation before or during a deployment. This means a deployment with a bad network configuration can run for two hours with no output and can result in failure. A network validation script is currently in development and will be released in the future.
BZ#1241644
When openstack-cinder-volume uses an LVM backend and the Overcloud nodes reboot, the file-backed loopback device is not recreated. As a workaround, manually recreate the loopback device:

$ sudo losetup /dev/loop2 /var/lib/cinder/cinder-volumes

Then restart openstack-cinder-volume. Note that openstack-cinder-volume only runs on one node at a time in a high availability cluster of Overcloud Controller nodes. However, the loopback device should exist on all nodes.
BZ#1243306
Ephemeral storage is hard coded as true when using the NovaEnableRbdBackend parameter. This means instances deployed with NovaEnableRbdBackend cannot use cinder volumes backed by Ceph Storage. As a workaround, add the following to puppet/hieradata/compute.yaml:

nova::compute::rbd::ephemeral_storage: false

This disables ephemeral storage.
BZ#1245826
The "openstack overcloud update stack" command returns immediately despite ongoing operations in the background. The command seems to run forever because it's not interactive. In these situations, run the command with the "-i" flag. This prompts the user for any manual interaction needs.
BZ#1249210
A timing issue sometimes causes Overcloud neutron services to not start correctly. This means instances are not accessible. As a workaround, you can run the following command on the Controller node cluster:

$ sudo pcs resource debug-start neutron-l3-agent

After this, instances will work correctly.
BZ#1266565
Currently, certain setup steps require an SSH connection to the overcloud controllers, and will need to traverse VIPs to reach the Overcloud nodes.
If your environment is using an external load balancer, then these steps are not likely to successfully connect. You can work around this issue by configuring the external load balancer to forward port 22. As a result, the SSH connection to the VIP will succeed.
BZ#1269005
In this release, RHEL OpenStack Platform director only supports a High Availability (HA) overcloud deployment using three controller nodes.
BZ#1274687
There is currently a known requirement that arises when the director connects to the Public API to complete the final post-deployment configuration steps: the Undercloud node must have a route to the Public API, and the API must be reachable on the standard OpenStack API ports and port 22 (SSH).
To prepare for this requirement, check that the Undercloud will be able to reach the External network on the controllers, as this network will be used for post-deployment tasks. As a result, the Undercloud can be expected to successfully connect to the Public API after deployment, and perform final configuration tasks. These tasks are required in order for the newly created deployment to be managed using the Admin account.
BZ#1282951
When deploying Red Hat OpenStack Platform director, the bare-metal nodes should be powered off, and the ironic `node-state` and `provisioning-state` must be correct.
For example, if ironic lists a node as "Available, powered-on", but the server is actually powered off, the node cannot be used for deployment.
To avoid this, ensure that the node state in ironic matches the actual node state. Use "ironic node-set-power-state <node> [on|off]" and/or "ironic node-set-provisioning-state <node> available" to make the power state in ironic match the real state of the server, and ensure that the nodes are marked `Available`.
Once the state in ironic is correct, ironic will be able to correctly manage the power state and deploy to the nodes.
BZ#1293379
There is currently a known issue where network configuration changes can cause interface restarts, resulting in an interruption of network connectivity on overcloud nodes.
Consequently, the network interruption can cause outages in the pacemaker controller cluster, leading to nodes being fenced (if fencing is configured). As a result, tripleo-heat-templates is designed to not apply network configuration changes on overcloud updates. By not applying any network configuration changes, the unintended consequence of a cluster outage is avoided.
BZ#1293422
IBM x3550 M5 servers require firmware with minimum versions to work with Red Hat OpenStack Platform. 
Consequently, older firmware levels must be upgraded prior to deployment. Affected systems will need to upgrade to the following versions (or newer):
DSA 10.1, IMM2 1.72, UEFI 1.10, Bootcode NA, Broadcom GigE 17.0.4.4a 

After upgrading the firmware, deployment should proceed as expected.
BZ#1302081
Address ranges entered in `AllocationPools` for IPv6 networks and IP allocation pools must use a valid format according to RFC 5952; invalid entries will result in an error.
IPv6 addresses should be entered in a valid format: leading zeros can be omitted or entered in full, and repeating sequences of zeros may be replaced by "::".
For example, the address "fd00:0001:0000:0000:00a1:00b2:00c3:0010" may be represented as "fd00:1::a1:b2:c3:10", but not as "fd00:01::0b2:0c3:10", because the groups "01", "0b2", and "0c3" are neither fully padded nor fully truncated of leading zeros.
BZ#1312155
The controller_v6.yaml template contains a parameter for a Management network VLAN. This parameter is not supported in the current version of the director, and can be safely ignored along with any comments referring to the Management network. The Management network references do not need to be copied to any custom templates.

This parameter will be supported in a future version.
BZ#1323024
A puppet manifest bug incorrectly disables LVM partition automounting during the undercloud installation process. As a result, it is possible for undercloud hosts with partitions other than root and swap (activated on kernel command line) to only boot into an emergency shell.

There are several ways to work around this issue. Choose one from the following:

1. Remove the mountpoints manually from /etc/fstab. Doing so will prevent the issue from manifesting in all future cases. Other partitions could also be removed, and the space added to other partitions (like root or swap).

2. Configure the partitions to be activated in /etc/lvm.conf. Doing so will work until the next update/upgrade, when the undercloud installation is re-run.

3. Restrict initial deployment to only root and swap partitions. This will avoid the issue completely.
BZ#1368279
When using Red Hat Ceph as a back end for ephemeral storage, the Compute service does not calculate the amount of available storage correctly. Specifically, Compute simply adds up the amount of available storage without factoring in replication. This results in grossly overstated available storage, which in turn could cause unexpected storage oversubscription.

To determine the correct ephemeral storage capacity, query the Ceph service directly instead.
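For example, a quick way to check the real capacity (assuming administrative access to the Ceph cluster):

  # ceph df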
BZ#1372804
Previously, the Ceph Storage nodes used the local filesystem, formatted with `ext4`, as the back end for the `ceph-osd` service.

Note: Some `overcloud-full` images for Red Hat OpenStack Platform 9 (Mitaka) were created using `ext4` instead of `xfs`.

With the Jewel release, `ceph-osd` checks the maximum file name length allowed by the back end and refuses to start if the limit is lower than the one configured for Ceph itself. As a workaround, you can verify the filesystem in use for `ceph-osd` by logging on to the Ceph Storage nodes and using the following command:

  # df -l --output=fstype /var/lib/ceph/osd/ceph-$ID

Here, $ID is the OSD ID, for example: 

  # df -l --output=fstype /var/lib/ceph/osd/ceph-0

Note: A single Ceph Storage node might host multiple `ceph-osd` instances, in which case there will be multiple subdirectories under `/var/lib/ceph/osd/`, one for each instance.

If *any* of the OSD instances is backed by an `ext4` filesystem, it is necessary to configure Ceph to use shorter file names, which is possible by deploying/upgrading with an additional environment file, containing the following:

  parameter_defaults:
    ExtraConfig:
      ceph::profile::params::osd_max_object_name_len: 256
      ceph::profile::params::osd_max_object_namespace_len: 64

With this workaround in place, you can verify that each and every `ceph-osd` instance is up and running after an upgrade from Red Hat OpenStack Platform 9 to Red Hat OpenStack Platform 10.
BZ#1383627
Nodes that are imported using "openstack baremetal import --json instackenv.json" should be powered off prior to attempting import. If the nodes are powered on, ironic will not add the nodes or run introspection.
As a workaround, power off all overcloud nodes prior to running "openstack baremetal import --json instackenv.json".
As a result, if the nodes are powered off, the import should work successfully.
BZ#1383930
If using DHCP HA, set the `NeutronDhcpAgentsPerNetwork` value either to the number of dhcp-agents or to 3 (whichever is lower), using composable roles. Otherwise, the value defaults to `ControllerCount`, which may not be optimal because there may not be enough dhcp-agents running to satisfy spawning that many DHCP servers for each network.
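For example, a sketch of the override in an environment file:

  parameter_defaults:
    NeutronDhcpAgentsPerNetwork: 3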
BZ#1385034
When upgrading or deploying a Red Hat OpenStack Platform environment integrated with an external Ceph Storage Cluster from an earlier version (that is, Red Hat Ceph Storage 1.3), you need to enable backwards compatibility. To do so, add an environment file containing the following snippet to your upgrade/deployment:

parameter_defaults:
  ExtraConfig:
    ceph::conf::args:
      client/rbd_default_features:
        value: "1"
BZ#1391022
Red Hat Enterprise Linux 6 only contains GRUB Legacy, while OpenStack bare metal provisioning (ironic) only supports the installation of GRUB2. As a result, deploying a partition image with local boot will fail during the bootloader installation. 
As a workaround, if using RHEL 6 for bare metal instances, do not set boot_option to local in the flavor settings. You can also consider deploying a RHEL 6 whole disk image which already has GRUB Legacy installed.
BZ#1396308
When deploying or upgrading to a Red Hat OpenStack Platform 10 environment that uses Ceph and dedicated blockstorage nodes for LVM, creating instances with attached volumes will no longer work. This is caused by a bug in the way the director configures the Block Storage service during upgrades.

Specifically, the heat templates do not account by default for cases where Ceph and dedicated blockstorage nodes are configured together. As such, the director fails to define some required settings.

Note that LVM is not a suitable Block Storage back end in production, particularly in enterprise environments.

To work around this add an environment file to your upgrade/deployment that contains the following:

parameter_defaults:
  BlockStorageExtraConfig:
    tripleo::profile::base::cinder::volume::cinder_enable_iscsi_backend: true
    tripleo::profile::base::cinder::volume::cinder_enable_rbd_backend: false
BZ#1463059
When using Red Hat Ceph Storage as a back end for both Block Storage (cinder) volumes and backups, any attempt to perform an incremental backup will result in a full backup instead, without any warning.
BZ#1321179
OpenStack command-line clients that use `python-requests` cannot currently validate certificates that have an IP address in the SAN field.

3.1.5. Deprecated Functionality

The items in this section are either no longer supported, or will no longer be supported in a future release.
BZ#1261539
Support for nova-network is deprecated as of Red Hat OpenStack Platform 9 and will be removed in a future release. When creating new environments, it is recommended to use OpenStack Networking (Neutron).
BZ#1404907
In accordance with the upstream project, the LBaaS v1 API has been removed. Red Hat OpenStack Platform 10 supports only the LBaaS v2 API.