Chapter 6. Technical Notes

This chapter supplements the information contained in the text of Red Hat Enterprise Linux OpenStack Platform "Kilo" errata advisories released through the Content Delivery Network.

6.1. RHEA-2015:1548 — Red Hat Enterprise Linux OpenStack Platform Enhancement Advisory

The bugs contained in this section are addressed by advisory RHEA-2015:1548. Further information about this advisory is available at https://access.redhat.com/errata/RHEA-2015:1548.html.

6.1.1. crudini

BZ#1223624
Prior to this update, separate lock files were used while updating config files. In addition, directory entries were not correctly synchronized during an update.
As a result, a crash during this process could cause deadlock issues on subsequent config update attempts or, very occasionally, result in corrupted (empty) config files.
This update adds more robust locking and synchronization to the 'crudini' utility. As a result, config file updates are now more robust during system crash events.
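The crash-safe pattern that such a fix relies on, writing a temporary file, flushing it to disk, and then atomically renaming it over the original, can be sketched in shell. This is a minimal illustration of the general technique, not crudini's actual implementation; the file path and option shown are invented for the example:

```shell
# Sketch of a crash-safe config update: write the new contents to a
# temporary file, flush them to disk, then atomically rename the temp
# file over the original so readers never observe a partial (or empty) file.
cfg=/tmp/crudini_demo.conf
printf '[DEFAULT]\ndebug = false\n' > "$cfg"

tmp="$cfg.tmp"
sed 's/^debug = false$/debug = true/' "$cfg" > "$tmp"
sync "$tmp"          # flush the new contents before the rename
mv -f "$tmp" "$cfg"  # rename() is atomic on POSIX filesystems

grep '^debug' "$cfg"
```

A crash before the 'mv' leaves the original file untouched; a crash after it leaves the fully written new file in place.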

6.1.2. mariadb-galera

BZ#1211088
This rebase package includes a notable fix under version 5.5.42:
* An issue was resolved whereby INSERT statements that use auto-incrementing primary keys could fail with a "DUPLICATE PRIMARY KEY" error on an otherwise working Galera node, if a different Galera node that was also handling INSERT statements on that same table was recently taken out of the cluster. The issue would cause OpenStack applications to temporarily fail to create new records while a Galera failover operation was in-progress.

6.1.3. openstack-ceilometer

BZ#1232163
Previous versions of 'alarm-history' did not give an indication of when the severity of a given alarm was changed (for example, from 'low' to 'critical'); instead, a change was indicated without any detail of what the change was.
This update addresses the issue with a code change, and 'alarm-history' now displays severity changes in its output.
BZ#1240532
Previously, when a ceilometer polling extension could not be loaded, an ERROR message was logged. This was misleading in cases where the failure to load a module was the expected outcome, such as when an extension was optional or its dependent modules were not available. Now, the log messages have been changed to WARN level to make it clear that there is no serious fault.

6.1.4. openstack-cinder

BZ#1133175
This update adds extended volume manage and unmanage support for the NetApp C-mode and 7-mode iSCSI drivers. This allows volumes that already exist on these back ends to be brought under, or released from, Block Storage management.
BZ#1133177
With this update, a new feature implements support for managing and unmanaging volumes with the NetApp E-Series driver. You can now use the '--source-name' parameter as the mandatory input for volumes not under Block Storage management.
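For illustration, bringing an existing back-end volume under Block Storage management takes a form similar to the following; the host and volume names are placeholders, and the exact client syntax may vary by version:

   cinder manage --source-name existing-lun-01 cinder-host@eseries-backend#pool-01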
BZ#1156682
This update adds NFS back-ends for the cinder-backup service. This now allows back up of volumes to an NFS storage back end.
BZ#1159142
This update adds functionality to 'cinder-manage db' to safely purge old "deleted" data from the Cinder database. This reduces database space usage and improves database performance.
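For example, rows marked as deleted more than 30 days ago could be purged with a command of the following form (the age value is illustrative):

   cinder-manage db purge 30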
BZ#1200986
Prior to this update, SQLAlchemy objects were incorrectly shared between multiple 'cinder-volume' processes. 
Consequently, SQLAlchemy connections would fail when using a Block Storage multi-backend, resulting in database-related errors in the volume service.
This fix re-initializes SQLAlchemy connections when forking 'cinder-volume' child processes. 
As a result, multi-backend now works as expected.
BZ#1208767
In the previous version, creating a volume from an image could fail. On virtual disks with a high number of sectors, the sector count was in some cases handled incorrectly, and converting a QEMU image failed with an "invalid argument" error.

This bug has been resolved by updating to a fixed version of qemu-img that corrects the calculation error. Creating a volume from an image now works successfully.

6.1.5. openstack-glance

BZ#1118578
The Image Service now features improved logging, providing better information to users. In addition, logs have been stripped of any sensitive information, and use the appropriate logging levels for messages. This change is only visible to operators.
BZ#1151300
With this update, it is now possible to dynamically reload the Image service configuration settings by sending a SIGHUP signal to a 'glance-*' process. The signal ensures that the process re-reads its configuration file and loads any new settings. As a result, there is no need to restart the entire Image service to apply configuration changes.
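For example, assuming the API process is the one to be reloaded, the signal could be sent as follows (the 'pgrep' lookup is illustrative):

   kill -HUP $(pgrep -f glance-api)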
BZ#1155388
With this update, the underlying asynchronous task engine has been changed. It is now based on the taskflow library. While this does not introduce changes to the API or workflow, it adds the following new configuration option:

[taskflow_executor]
engine_mode = serial # or parallel
BZ#1164520
Previously, the glance-manage utility was configured using 'glance-api.conf' or 'glance-registry.conf'. This release features a new configuration file named 'glance-manage.conf', which can be used to configure glance-manage. You can still use 'glance-api.conf' and 'glance-registry.conf' to configure glance-manage, but any 'glance-manage.conf' settings will take precedence.
BZ#1168371
Previously, the Image service's 'swift' store implementation stored all images in a single container. While this worked, it created a performance bottleneck in large-scale deployments.

With this update, it is now possible to use several Object Storage containers as storage for 'glance' images. To use this feature, set 'swift_store_multiple_containers_seed' to a value greater than '0'. You can disable the use of multiple seeded containers by enabling the 'swift_store_multi_tenant' parameter, as containers are then split on a per-tenant basis.
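A hypothetical 'glance-api.conf' fragment enabling seeded multiple containers might look like the following (the seed value is illustrative):

   [glance_store]
   swift_store_multiple_containers_seed = 16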
BZ#1170475
The glance_store library now supports more storage capabilities. As such, you now have more granular control over what operations are allowed in a specific store. This release features the following capabilities:

 - READ_ACCESS: Generic read access
 - WRITE_ACCESS: Generic write access
 - RW_ACCESS: READ_ACCESS and WRITE_ACCESS
 - READ_OFFSET: Read all bits from an offset (included in READ_ACCESS)
 - WRITE_OFFSET: Write all bits to an offset (included in WRITE_ACCESS)
 - RW_OFFSET: READ_OFFSET and WRITE_OFFSET
 - READ_CHUNK: Read a required length of bits (included in READ_ACCESS)
 - WRITE_CHUNK: Write a required length of bits (included in WRITE_ACCESS)
 - RW_CHUNK: READ_CHUNK and WRITE_CHUNK
 - READ_RANDOM: READ_OFFSET and READ_CHUNK
 - WRITE_RANDOM: WRITE_OFFSET and WRITE_CHUNK
 - RW_RANDOM: RW_OFFSET and RW_CHUNK
 - DRIVER_REUSABLE: The driver is stateless and its instance can be reused safely
BZ#1170476
With this update, a completely new API is now available that adds search capabilities to the Image service and improves the performance of listing and search operations, especially for interactions with the UI.

The search API allows users to execute a search query and get back search hits that match the query. The query can either be provided using a simple query string as a parameter, or using a request body. All the search APIs can be applied across multiple types within an index, and across multiple indices, with support for multi-index syntax.

Note: This enhancement will be removed from the Image service during the RHEL OpenStack Platform 8 (Liberty) release.
BZ#1189811
Previously, every call to policy.enforce passed an empty dictionary as the target. This prevented operators from using tenant-specific restrictions in their policy.json files, since the target would always be an empty dictionary. If you tried to restrict some actions so that only an image owner (a user with the correct tenant ID) could perform them, the check categorically failed because the target was an empty dictionary.

With this update, you can pass the ImageTarget instance wrapping an Image to the enforcer so these rules can be used and properly enforced. You can now properly grant access to the image owner(s) based on tenant (e.g., owner:%(tenant)). Without this fix, the only check that actually works in Image service is a RoleCheck (e.g., role:admin).
BZ#1198911
With this update, it is now possible to filter the list operations by more than one filter option and in multiple directions. For example:

  /images?sort=status:asc,name:asc,created_at:desc

With the above, a list of images will be returned and they will be sorted by status, name, and creation date with the following directions respectively: ascending, ascending, and descending.
BZ#1201116
This update delivers the same enhancement as BZ#1198911 above: list operations can be filtered by more than one sort option and in multiple directions.

6.1.6. openstack-heat

BZ#1042222
The Orchestration service now includes an "OS::Heat::Stack" resource type. This OpenStack-native resource is used to explicitly create a child stack in a template. The "OS::Heat::Stack" resource type includes a 'context' property with a 'region_name' subproperty, allowing Orchestration service to manage stacks in different regions.
BZ#1053078
Resources of type AWS::EC2::SecurityGroup can now be updated in-place when their rules are modified. This is consistent with the behaviour of AWS::EC2::SecurityGroup in CloudFormation. Previously, security groups would be replaced if they were modified.
BZ#1108981
Heat now supports user hooks, which pause execution of stack operations at specified points to allow the user to insert their own actions into Heat's workflow. Hooks are attached to resources in the stack's environment file. Currently supported hook types are 'pre-create' and 'pre-update'.
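For example, a 'pre-create' hook could be attached to a resource in the environment file with a fragment of the following form (the resource name is a placeholder):

   resource_registry:
     resources:
       my_server:
         hooks: pre-create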
BZ#1122774
The OS::Nova::Server resource type now includes a 'console_urls' property. This enables the user to obtain the URL for the server's console (such as a VNC console) from the resource.
BZ#1142563
When querying a resource in the Orchestration API, a user can now request the value of one or more of the resource's attributes be included in the output. This can aid debugging, as it allows the user to retrieve data from any resource at any time without having to modify the stack's template to include that data in the outputs section.
BZ#1143805
The OS::Cinder::Volume resource type now includes a 'scheduler_hints' property. This allows scheduler hints to be passed to the Block Storage service when creating a volume, and requires v2 of the Block Storage API.
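A template fragment using the new property might look like the following; the hint key and value are placeholders:

   my_volume:
     type: OS::Cinder::Volume
     properties:
       size: 1
       scheduler_hints: {my_hint: my_value}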
BZ#1144230
The heat-manage command now includes a subcommand "heat-manage service-list". This subcommand displays information about active "heat-engine" processes, where they are running, and their current status.
BZ#1149959
The OS::Neutron::Port resource type now supports a 'binding:vnic_type' property. This property enables users with the appropriate permissions to specify the VNIC type of an OpenStack Networking port.
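For example, a port requesting a specific VNIC type could be declared as follows (the network name is a placeholder):

   my_port:
     type: OS::Neutron::Port
     properties:
       network: my_network
       binding:vnic_type: direct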
BZ#1156671
The AWS::AutoScaling::AutoScalingGroup resource type now supports an 'InstanceId' property. This allows the launch configuration for an autoscaling group to be cloned from an existing server instead of an AWS::AutoScaling::LaunchConfiguration resource.
BZ#1159598
The AWS::AutoScaling::LaunchConfiguration resource type now supports an 'InstanceId' property. This allows the launch configuration for an autoscaling group to be cloned from an existing server.
BZ#1212625
Previously, when the 'files' section of an environment was changed in a stack update, the Orchestration service combined the new files with the old stack definition to calculate the previous state. The objective of this was to compare the previous state against the new files and new template.
As a result, the Orchestration service did not notice changes in the included files; so any updates, based solely on changes to the files, would not occur. In addition, if a previously-referenced file was removed from the environment in a stack update, the stack update would fail (though later updates with the same data could succeed).

With this release, the Orchestration service now combines the old stack with the old files to compare against the new template and new files. Updates now work as expected when editing included files in the environment.
BZ#1218692
In previous releases, changes to the absolute path of a template for a template resource (as in, a resource implicitly backed by a stack) were not recognized by the Orchestration service. This prevented nested stacks backing a template resource from being updated whenever that resource's template was renamed or moved. 

With this release, the Orchestration service can now detect such changes, thereby ensuring that nested stacks are updated accordingly.

6.1.7. openstack-ironic

BZ#1151691
Bare Metal now supports the management interface of HP ProLiant servers using the iLO client python library. This allows Bare Metal to perform management operations such as retrieving or setting the boot device.
BZ#1153875
The Bare Metal service can now use cloud-init and similar early-initialization tools to insert user data on instances. Previously, doing so would have required setting up a metadata service to perform this function.

With this new update, Bare Metal can insert instance metadata onto local disk upon deployment -- specifically, to a device labeled 'config-2'. Afterwards, you can configure the early-initialization tool to find this device and extract the data from there.
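For example, an early-initialization tool (or an administrator) could locate and inspect the device as follows; the mount point is illustrative, and the metadata layout follows the standard config-drive format:

   mount /dev/disk/by-label/config-2 /mnt/config
   cat /mnt/config/openstack/latest/meta_data.json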
BZ#1154485
The Bare Metal service can now deploy nodes using the Secure Boot feature of the UEFI (http://www.uefi.org). Secure Boot helps ensure that nodes boot only trusted software.

With this, the whole boot chain can be verified at boot time. You can then configure nodes to only boot authorized images, thereby enhancing security.
BZ#1154927
Bare Metal instances now feature a new field named 'maintenance_reason', which can be used to indicate why a node is in maintenance mode.
BZ#1165499
The Bare Metal service now supports Fujitsu iRMC (integrated Remote Management Controller) hardware. With this, Bare Metal can now manage the power state of such machines.
BZ#1198904
All Ironic drivers now support deployment via IPA ramdisk. IPA is written in Python, supports more features than the BASH ramdisk, and runs as a service. For these reasons, nodes deployed through IPA are generally easier to deploy, debug, and manage.
BZ#1230142
Previously, the WSMAN interface on the DRAC card would change between 11g and 12g hardware.
Consequently, `get_boot_device` and `set_boot_device` calls would fail in OpenStack Bare Metal Provisioning (Ironic) when using the DRAC driver on 11g hardware.
With this update, the DRAC driver checks the Lifecycle controller version, and uses alternate methods on different versions to manage the boot device.
As a result, `get_boot_device` and `set_boot_device` operations succeed on 11g nodes.
BZ#1230163
The Compute service expects to be able to delete an instance at any time; however, a Bare Metal instance can only be aborted at a specific stage -- namely, when it is in the DEPLOYWAIT state. As a result, whenever the Compute service attempted to delete a Bare Metal instance that was not in the DEPLOYWAIT state, the attempt failed and the instance got stuck in its current state, requiring a database change to resolve.

With this release, Bare Metal instances no longer get stuck mid-deployment when Compute attempts to delete them. The Bare Metal service still won't abort an instance unless it is in the DEPLOYWAIT state.
BZ#1231327
Previously, the DRAC driver in OpenStack Bare Metal Provisioning (Ironic) incorrectly recognized the job status 'completed with errors' as an 'in-progress' status. Consequently, `get_boot_device` and `set_boot_device` tasks failed, as they require that no in-progress jobs be present.
This update addresses this issue by adding 'completed with errors' to the list of completed statuses. As a result, `get_boot_device` and `set_boot_device` tasks will proceed even if there is a 'completed with errors' job on the DRAC card.
BZ#1231331
Previously, the `pass_bootloader_install_info` method was missing from the DRAC `vendor_passthru` interface. Consequently, PXE deployment tasks failed when local boot was enabled.
This fix adds the `pass_bootloader_install_info` method from the standard PXE interface to the DRAC `vendor_passthru` interface. As a result, deployment is expected to succeed when local boot is enabled.
BZ#1233452
Prior to this update, OpenStack Bare Metal Provisioning (Ironic) operations, such as 'Power off', held a lock on a node for longer than expected.
Consequently, certain operations would fail to run while the node was still considered locked.
This update adjusts the retry timeout to two minutes. As a result, no further node lock errors have been noted.

6.1.8. openstack-keystone

BZ#1110589
The Identity Service (keystone) now allows for re-delegation of trusts. This allows a trustee with a trust token to create another trust to delegate their roles to others. In addition, a counter enumerates the number of times a trust can be re-delegated.
This feature allows a trustee to re-delegate the roles contained in its trust token to another trustee. The user creating the initial trust can control whether the trust can be re-delegated when this is necessary.
Consequently, trusts can now be re-delegated if the original trust allows it.
BZ#1121844
Identity Service (keystone) now allows for unscoped tokens to be explicitly requested.
This feature was added after users who had a default project assigned were previously unable to retrieve unscoped tokens; if one of these users requested a token without defining a scope, it would be automatically scoped to the default project.
As a result of this update, unscoped tokens can now be issued to all users, even if they have a default project defined.
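With the v3 API, an explicitly unscoped token is requested by setting the scope to the string 'unscoped' in the authentication request body, for example (the user details are placeholders):

   {"auth": {"identity": {"methods": ["password"],
                          "password": {"user": {"name": "demo",
                                                "domain": {"id": "default"},
                                                "password": "secret"}}},
             "scope": "unscoped"}}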
BZ#1165505
With this update, the Identity service (keystone) is now able to construct a hierarchy of projects by specifying a 'parent_id' within a project resource.
Previously, the Identity service only allowed for a flat project model; a project hierarchy allows for more flexible project structures, which can be used to mimic organizational structures.
As a result, projects can now define a parent project, allowing project hierarchies to be constructed.
BZ#1189633
The Identity service now allows unscoped federation tokens to be used to obtain a scoped token using the 'token' authentication method.

When using the Identity service's federation extension, an unscoped federation token is returned as a result of the initial authentication, and is then exchanged for a scoped token. Previously, an unscoped federation token had to use the 'saml2' or 'mapped' authentication method to obtain a scoped token. This was inconsistent with the method used to exchange a regular unscoped token for a scoped token, which uses the 'token' method.

Exchanging an unscoped federation token for a scoped token now uses the 'token' authentication method, which is consistent with the regular unscoped token behavior.
BZ#1189639
The Identity service now restricts rescoping of tokens to only allow unscoped tokens to be exchanged for scoped tokens.

The Identity service allows an existing token to be used to obtain a new token via the 'token' authentication method. Previously, a user with a valid token scoped to a project could use that token to obtain another token for a different project that they were authorized for. This allowed anyone possessing a user's token to have access to any project the user has access to, as opposed to only the project that the token is scoped for. To improve the security properties of scoped tokens, it was desirable to disallow this.
 
A new 'allow_rescope_scoped_token' configuration option is available to allow token rescoping to be restricted. When this option is disabled, rescoping is only possible by authenticating with an unscoped token.
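For example, rescoping could be restricted with a 'keystone.conf' fragment of the following form (section placement shown as an illustration):

   [token]
   allow_rescope_scoped_token = false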
BZ#1196013
The Identity service now has experimental support for a new token format called 'fernet'.

The token formats currently supported by the Identity service require issued tokens to be persisted in a database table. This table can grow quite large, which requires proper tuning and a flush job to keep the Identity service performing well. The new 'fernet' token format is designed to allow the token database table to be eliminated, avoiding the problem of this table becoming a scalability limitation. The 'fernet' token format is now available as an experimental feature.
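As a sketch, enabling the experimental format involves pointing the token provider at the fernet implementation in 'keystone.conf' and generating a key repository; the exact provider path and commands should be checked against the shipped documentation:

   [token]
   provider = keystone.token.providers.fernet.Provider

followed by 'keystone-manage fernet_setup' to create the signing keys.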

6.1.9. openstack-neutron

BZ#1108790
Prior to this update, when manually switching the tunnel source IP address on an Open vSwitch (OVS) agent, other agents kept two tunnels open to the agent: one to its old IP address and one to the new.
As a result, superfluous metadata would build up on all hypervisors in the cloud running the OVS agent.
To address this, the Network node now detects a scenario where an IP address has changed on a host, persists the new information, and notifies the other agents of the IP address change.
BZ#1152579
Previously, the OpenStack Dashboard LBaaS pool details page would not correctly handle the unexpected case of the subnet attached to an LBaaS pool being deleted.
Consequently, if you created a network, subnet, router, and load balancer, and then deleted the network, subnet, and router, but retained the load balancer, the OpenStack Dashboard LBaaS details page would return error 500.
This update addresses this issue by checking for this scenario and displaying a warning message instead. As a result, the LBaaS details page now renders correctly and displays a warning as needed.
BZ#1153446
With this update, administrators are now able to view the state of High Availability routers on each node, and specifically, where the active instance is hosted. 
Previously, the High Availability router state information was not visible to the administrator; this made maintenance harder, for example when moving HA router instances from one agent to another, or when assessing the impact of putting a node in maintenance mode.
This new functionality also serves as a sanity test and offers assurance that a router is indeed active on only one node. As a result, administrators may now run the 'neutron l3-agent-list-hosting-router <router_id>' command on a High Availability router to view where the active instance is currently hosted.
BZ#1158729
OpenStack Networking deployments with distributed routers are now able to allow tenants to create their own networks with VLAN segmentation.
Previously, distributed routers only supported tunnel networks, which may have hindered adoption as many deployments prefer to use VLAN tenant networks.
As a result of this update, distributed routers are now able to service tunnel networks as well as VLAN networks.
BZ#1213148
Red Hat Enterprise Linux OpenStack Platform 7 uses libreswan instead of openswan; however, the OpenStack Networking (neutron) openswan VPNaaS driver does not function with libreswan.
With this update, you can enable the libreswan-specific driver in vpnagent.ini:
[vpnagent]
vpn_device_driver=neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver

As a result, VPNaaS works as expected.
BZ#1221034
Due to a known issue with the 'python-neutron-fwaas' package, Firewall-as-a-Service (FWaaS) may fail to work. This is a result of the 'python-neutron-fwaas' package missing the database upgrade 'versions' directory.
In addition, upgrading the database schemas between version releases may not function correctly at this time.
BZ#1221076
This is the same known issue described under BZ#1221034 above: the 'python-neutron-fwaas' package is missing the database upgrade 'versions' directory, so Firewall-as-a-Service (FWaaS) may fail to work, and upgrading the database schemas between version releases may not function correctly at this time.
BZ#1227633
Previously, dnsmasq did not save lease information in persistent storage, and when it was restarted, the lease information was lost. This behavior was a result of the removal of the dnsmasq '--dhcp-script' option under BZ#1202392.
As a result, instances were stuck in the network boot process for a long period of time. In addition, NACK messages were noted in the dnsmasq log.
This update addresses the issue by removing the authoritative option, so that NAKs are not sent in response to DHCPREQUESTs intended for other servers. This prevents dnsmasq from NAKing clients renewing leases issued before it was restarted or rescheduled, with the result that no DHCPNAK messages appear in the log files.
BZ#1228096
In Kilo, Neutron services can now rely on the so-called rootwrap daemon to execute external commands such as 'ip' or 'sysctl'. The daemon pre-caches rootwrap filters and drastically improves overall agent performance.

In RHEL-OSP7, the rootwrap daemon is enabled by default. If you want to avoid using it and stick with another root privilege separation mechanism, such as 'sudo', make sure you also disable the daemon by setting 'root_helper_daemon =' in the [agent] section of your neutron.conf file.
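For reference, the daemon is enabled with a setting of the following form in neutron.conf (the rootwrap configuration path shown is the usual default):

   [agent]
   root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf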

6.1.10. openstack-neutron-lbaas

BZ#1228227
Prior to this update, the .service file was missing for the 'neutron-lbaasv2-agent' service.
Consequently, there was no way to start the agent under systemd.
This update adds the missing .service file to the package.
As a result, the command 'systemctl start neutron-lbaasv2-agent' now starts the service.

6.1.11. openstack-nova

BZ#1041068
You can now use VMware vSAN data stores. These stores allow you to use vMotion while simultaneously using hypervisor-local storage for instances.
BZ#1052804
You can now use VMware storage policy to manage how storage is assigned to different instances. This can help you ensure that instances are assigned to the most appropriate storage in an environment where multiple data stores (of varying costs and performance properties) are attached to a VMware infrastructure.
BZ#1085989
Previously, the Compute database was missing an index on the virtual_interfaces table. Because of this, as the table grew large, operations on it became unacceptably slow, causing timeouts.

This release adds the missing index to the virtual_interfaces table, ensuring that large amounts of data in the virtual_interfaces table do not significantly impact performance.
BZ#1193287
Support has been added for intelligent NUMA node placement for guests that have been assigned a host PCI device. PCI I/O devices, such as Network Interface Cards (NICs), can be more closely associated with one processor than another. This is important because memory performance and latency differ when accessing memory directly attached to one processor rather than memory directly attached to another processor in the same server. With this update, OpenStack guest placement can be optimized by ensuring that a guest bound to a PCI device is scheduled to run on a NUMA node that is associated with the guest's pCPU and memory allocation. For example, if a guest's resource requirements fit in a single NUMA node, all guest resources will now be associated with the same NUMA node.
BZ#1203160
After fully upgrading to Red Hat Enterprise Linux OpenStack Platform 7 from version 6 (and all nodes are running version 7 code), you should start a background migration of PCI device NUMA node information from the old location to the new location. Version 7 conductor nodes will do this automatically when necessary, but the rest of the idle data needs to be migrated in the background. This is critical to complete before the version 8 release, where support for the old location will be dropped. Use 'nova-manage migrate-rhos-6-pci-device-data' to perform this transition.
Note that this is relevant only for users making use of the PCI pass-through features of Compute.
BZ#1226438
Previously, launching an instance on a nova-network compute node configured by staypuft/openstack-foreman-installer failed with an error. This was because the conntrack-tools package was missing from the installer.

This bug was fixed by adding a line to openstack-nova.spec to install the conntrack-tools package for the nova-network service. nova-network can now configure networks, and no error is reported.
BZ#1228295
Previously, when the primary path to a Cinder iSCSI volume was down, a volume could not be attached to the instance, even if the Compute and Block Storage back end driver's multipath feature was enabled. This meant that users of the cloud system could fail to attach a volume (or boot a server booted from a volume). 

With this fix, the host can now have a separate configuration option if the block traffic is on a separate network; the volume is then attached using the secondary path.
BZ#1229655
When deploying an OpenStack environment that uses IPv6, VNC consoles would fail to load, and an exception was raised to the client because the websocketproxy was unable to verify the origin header: "handler exception: Origin header does not match this host."

With this release, the code in websocketproxy has been updated to handle IPv6. As a result, users can now successfully connect to VNC consoles when all services are configured to use IPv6.
BZ#1230237
Previously, attempting to evacuate a virtual machine in nova failed when nova was used with neutron, because of a failure to update port bindings. A similar issue applied to FloatingIP setup for nova-network. As a result, the virtual machine could not be evacuated, because the creation of a required virtual interface failed.

With this fix, nova now correctly sets up the virtual machine in both kinds of network setup. You can now evacuate virtual machines successfully.
BZ#1230485
The libvirt driver used libguestfs for certain guest inspection and modification tasks. However, libguestfs is an external library that is not updated by eventlet's monkey patch. As a result, eventlet greenthreads did not run during libguestfs API calls; this, in turn, caused the openstack-nova-compute service to hang entirely for the duration of the call. The initial call to libguestfs after installation or a system update can take seconds, during which openstack-nova-compute was unresponsive.

With this release, calls to libguestfs are now pushed to a separate, non-Eventlet threadpool. Such calls now run asynchronously, and do not impact the responsiveness of openstack-nova-compute.
BZ#1242502
Previous releases used incorrect data versioning, which caused the PCI device data model to be sent in an incorrect format. This, in turn, prevented the openstack-nova-compute service from starting if there were any PCI-passthrough devices whitelisted.

This release now uses correct data versioning, thereby allowing openstack-nova-compute to start and register any whitelisted PCI devices.

6.1.12. openstack-packstack

BZ#1185652
This feature adds IPv6 support to Packstack, allowing Packstack to use IPv6 addresses as values in networking-related parameters such as CONFIG_CONTROLLER_HOST, CONFIG_COMPUTE_HOSTS, and CONFIG_NETWORK_HOSTS.
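For example, an answer file can now carry IPv6 values such as the following (the addresses use the documentation prefix and are illustrative):

   CONFIG_CONTROLLER_HOST=2001:db8::10
   CONFIG_COMPUTE_HOSTS=2001:db8::11,2001:db8::12
   CONFIG_NETWORK_HOSTS=2001:db8::13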

6.1.13. openstack-puppet-modules

BZ#1231918
Previously, puppet-neutron did not allow for customization of the neutron dhcp_domain setting. As a consequence, the overcloud nodes would be offered an invalid domain suffix by the undercloud DHCP. With this update, the neutron dhcp_domain setting has been made configurable, and defaults to an empty domain suffix.
BZ#1236057
Previously, the HAProxy configuration of the Telemetry service used incorrect checks, which caused the Telemetry service to fail in an HA deployment. Specifically, the HAProxy configuration did not have availability checks, and incorrectly used SSL checks instead of TCP.

This release fixes the checks, ensuring that the Telemetry service is correctly balanced and can launch in an HA deployment.
BZ#1244358
The Director uses misconfigured HAProxy settings when deploying the Bare Metal and Telemetry services with SSL enabled in the undercloud. This prevents some nodes from registering. 

To work around this, comment out 'option ssl-hello-chk' under the Bare Metal and Telemetry sections in /etc/haproxy/haproxy.cfg after installing the undercloud.
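After the workaround is applied, the affected sections in /etc/haproxy/haproxy.cfg would look similar to the following. This is an illustrative sketch of one section only, not the full shipped configuration:

```
listen ironic
    # Commented out as a workaround for BZ#1244358:
    #option ssl-hello-chk
```

The same change applies to the Telemetry (ceilometer) section.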

6.1.14. openstack-sahara

BZ#1149055
This enhancement adds namenode high availability as a supported option in the HDP 2.0.6 plugin. 
Users can signal that they require a cluster generated in HA mode by passing a cluster with a quorum of ZooKeeper servers and JournalNodes, and at least two NameNodes. For example:
"cluster_configs": {
   "HDFSHA": {
      "hdfs.nnha": true
   }
}
BZ#1155378
With this enhancement, the Sahara API now fully supports the HTTPS protocol.
BZ#1158163
Prior to this update, Sahara's 'distributed' mode feature was in alpha testing. Consequently, Red Hat Enterprise Linux OpenStack Platform did not package or support the 'sahara-api' or 'sahara-engine' processes individually.
With this update, the 'distributed' mode feature is considered stable, and RHEL OpenStack Platform now provides systemd unit files for the 'sahara-api' and 'sahara-engine' services.
As a result, users can run Sahara in distributed mode, with separation of the API and engine node clusters.
BZ#1164087
Sahara objects can now be queried by any field name. This is done by supplying GET parameters that match the API field names on list methods.
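To illustrate the query mechanism, a client filters list results by appending API field names as GET parameters. The endpoint path and field names below are hypothetical examples, not taken from the advisory:

```python
from urllib.parse import urlencode

def build_list_url(base_url, resource, **filters):
    """Build a Sahara-style list URL that filters by API field names."""
    url = "%s/%s" % (base_url.rstrip("/"), resource)
    if filters:
        # Each keyword becomes a GET parameter matching an API field name.
        url += "?" + urlencode(sorted(filters.items()))
    return url

# Filter clusters by name and Hadoop version (illustrative field names).
url = build_list_url("http://sahara:8386/v1.1/tenant", "clusters",
                     name="my-cluster", hadoop_version="2.0.6")
```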
BZ#1189500
This enhancement adds a CLI that allows configuration of the default cluster templates for each major plug-in. The provision of default templates is expected to speed up and facilitate end-user adoption of Sahara.
As a result of this update, administrators can now add shared default templates for adaptation and direct usage by customers.
BZ#1189504
Integration tests for Sahara have been refactored: the brittle pure-Python tests have been replaced with an easy, YAML-based configuration for defining "scenarios".
BZ#1189511
Previously, the cm_api library was not packaged by Cloudera for any Linux distribution. The previous CDH plug-in depended on this package, so CDH could not be enabled as a default plug-in prior to this release. Now, a subset of the cm_api library has been added to Sahara's codebase, and CDH is functional and enabled by default.
BZ#1192290
Previously, many of the processes in cluster creation polled indefinitely. Now, timeouts have been added for many stages of cluster creation and manipulation, and users are shown appropriate error messages when cluster operations take longer than is reasonable.
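The change can be sketched as replacing an unbounded poll loop with one that enforces a deadline. This is a simplified illustration; the function and message wording are hypothetical, not Sahara's actual code:

```python
import time

def poll_until(check, timeout=300, interval=1.0, operation="cluster creation"):
    """Poll 'check' until it returns True, or raise once 'timeout' elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    # Instead of polling forever, surface a clear error to the user.
    raise TimeoutError("%s took longer than %s seconds" % (operation, timeout))
```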
BZ#1194532
A new endpoint has been added to Sahara that allows queries of the available job types per plug-in and version that the Sahara installation supports. This information is useful both for UI presentation and filtering, and for CLI and REST API users.
BZ#1214817
Prior to this release, Red Hat Enterprise Linux OpenStack Platform did not package or support the sahara-api or sahara-engine processes individually, because Sahara's "distributed" mode was in alpha testing. Now that this feature is stable, RHEL OpenStack Platform provides systemd unit files for the sahara-api and sahara-engine services, and users can use Sahara in distributed mode, with separation of api and engine node clusters.
BZ#1231923
Previously, the HDP plug-in installed the Extra Packages for Enterprise Linux (EPEL) repository on cluster generation, even though neither the plug-in nor the sahara-image-elements package used the repository for any purpose. Consequently, a needless, potentially error-prone step was introduced into HDP cluster generation, and on update these clusters might pull in unsupported packages. Now, the repository is no longer installed by the HDP plug-in.
BZ#1231974
A logrotate file that enforces size limitations within the current Red Hat OpenStack standard has been added to prevent log files from becoming too large before they are rotated.
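A size-enforcing logrotate policy of this kind typically looks similar to the following. The path and limits shown are illustrative assumptions, not the shipped values:

```
/var/log/sahara/*.log {
    size 10M
    rotate 5
    compress
    missingok
    notifempty
}
```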
BZ#1238700
Prior to this update, while NameNode HA for HDP was functional and feature complete upstream, Sahara continued to point Oozie at a single NameNode IP for all jobs.
Consequently, Oozie and Sahara's EDP were only successful when a single, arbitrary node was designated active (in an A/P HA model).
This update addresses this issue by directing Oozie to the nameservice, rather than any one namenode.
As a result, Oozie and EDP jobs can succeed regardless of which NameNode is active.

6.1.15. openstack-selinux

BZ#1233154
Prior to this update, Neutron was trying to bind to a port that it was not allowed to use. Consequently, SELinux prevented Neutron from working. Now, Neutron is allowed to connect to unreserved ports and runs without issues.
BZ#1240647
Previously, the Neutron VPN agent was started with the wrong context. As a consequence, SELinux prevented the VPN agent from running. With this update, the Neutron VPN agent has the proper context, and as a result, it is able to run in enforcing mode.

6.1.16. python-django-horizon

BZ#1101375
OpenStack Trove instances can now be resized in the OpenStack dashboard user interface by selecting a new flavor for the instance.
BZ#1107490
The 'API Access' page in the dashboard ('Project > Compute > Access & Security > API Access') now provides more information on user credentials. To view this information, click 'View Credentials'. A pop-up displays the user name, project name, project ID, authentication URL, S3 URL, EC2 URL, EC2 access, and secret key.
BZ#1107924
The option to create Block Storage (cinder) volume transfers has been added to the 'Volumes' tab in the OpenStack dashboard. Volume transfers move ownership from one project to another. A donor creates a volume transfer, captures the resulting transfer ID and secret authentication key, and passes that information out of band to the recipient (such as by email or text message). The recipient accepts the transfer, supplying the transfer ID and authentication key. The ownership of the volume is then transferred from the donor to the recipient, and the volume is no longer visible to the donor.

Note the following limitations of the Block Storage API for volume transfers and their impact on the UI design:
1. When creating a volume transfer, you cannot specify who the intended recipient will be, and anyone with the transfer ID and authentication key can claim the volume. Therefore, the dashboard UI does not prompt for a recipient.
2. Current volume transfers are only visible to the donor; users in other projects are unable to view these transfers. So, the UI does not include a project table to view and accept volume transfers, since the current transfers are not visible. Instead, the transfer information is added to the volume details, which are visible by the donor, and the volume state clearly reflects that a transfer has been created. The UI also cannot present to the recipient a pull-down list of transfers to accept.
3. The only time that the authorization key is visible to the donor is in the response from the creation of the transfer; after creation, it is impossible for even the donor to recover it. Since the donor must capture the transfer ID and authorization key in order to send it to the recipient, an extra form was created to present this information to the donor immediately after the transfer has been created.
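The donor/recipient handshake described above can be sketched in pure Python. This is a simplified model for illustration only, not the Block Storage API itself:

```python
import secrets

class Volume:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner

class Transfer:
    """Models a volume transfer: anyone presenting the transfer ID and
    auth key may claim the volume, so no recipient is specified up front."""
    def __init__(self, volume):
        self.volume = volume
        self.transfer_id = secrets.token_hex(8)
        # Only visible at creation time; the donor must capture it
        # and pass it to the recipient out of band.
        self.auth_key = secrets.token_hex(8)

    def accept(self, transfer_id, auth_key, recipient):
        if (transfer_id, auth_key) != (self.transfer_id, self.auth_key):
            raise PermissionError("invalid transfer ID or auth key")
        self.volume.owner = recipient  # ownership moves to the recipient
        return self.volume
```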
BZ#1112481
OpenStack Dashboard now uses Block Storage (cinder) version 2 as its preferred version.
When a Block Storage client is requested, access is now given using cinder version 2, unless otherwise specified.
BZ#1114804
You can now use the dashboard to view, import, and associate metadata definitions that can be used with various resource types (such as images, artifacts, volumes, flavors, and aggregates).
BZ#1121848
In OpenStack Dashboard, the instance detail page now displays the host node. This data is intended to assist when diagnosing issues.
BZ#1124672
This update adds partial support for Domain Admins to the OpenStack Dashboard. In addition, when using Identity Service (keystone) version 3, a newly-created user does not need to have a primary project specified.
BZ#1143807
You can now disable and enable compute hosts through the dashboard. This capability is available through the 'Actions' column of every compute host in 'Admin > Hypervisors > Compute Host'.

Disabling a compute host prevents the scheduler from launching instances using that host.
BZ#1150839
The 'Manage/Unmanage' option has been added to the 'Volumes' tab of the OpenStack dashboard. 'Manage' takes an existing volume created outside of OpenStack and makes it available. 'Unmanage' removes the visibility of a volume within OpenStack, but does not delete the actual volume.
BZ#1156678
The user interface options available in the dashboard for the OpenStack Orchestration service (heat) have been improved. For example, users can now check, suspend, resume, and preview stacks.
BZ#1162436
The results displayed in tables for the Data Processing service can now be filtered to allow the user to see only those results that are relevant.
BZ#1162961
You can now flag a volume as 'Bootable' through the dashboard.
BZ#1166490
The OpenStack dashboard can now use a custom theme. A new setting, 'CUSTOM_THEME_PATH', was added to the /etc/openstack_dashboard/local_settings file. The theme folder should contain one _variables.scss file and one _styles.scss file. The _variables.scss file contains all the bootstrap and Horizon-specific variables that are used to style the graphical user interface, and the _styles.scss file contains extra styling.
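For example, the setting in /etc/openstack_dashboard/local_settings might read as follows. The theme path shown is an illustrative assumption:

```
# Folder containing _variables.scss and _styles.scss for the custom theme.
CUSTOM_THEME_PATH = 'themes/mytheme'
```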
BZ#1170470
SR-IOV can now be configured in the OpenStack dashboard. This includes exposing further information on the 'Port Details' tab, and allowing port type selection during port creation and update.
BZ#1170471
This enhancement allows you to view encryption metadata for encrypted volumes in OpenStack Dashboard (horizon). A function to display encryption metadata was added; the user can click 'Yes' in the 'Encrypted' column to open a page where the encryption metadata is visible.
BZ#1186380
When uploading an image through the dashboard, you can now select OVA as its format. In previous releases, OVA was not available as an option.
BZ#1189711
The dashboard now provides wizards for creating and configuring the necessary components of the OpenStack Data Processing feature. These wizards are useful for guiding users through the process of cluster creation and job execution. To use these wizards, go to 'Project > Data Processing > Guides'.
BZ#1189716
This enhancement adds Ceilometer IPMI meters to OpenStack Dashboard.
Six IPMI meters have been exported from Ceilometer; the methods 'list_ipmi' and '_get_ipmi_meters_info' are used to retrieve the meter data.
BZ#1190312
You can now view details about Orchestration service hosts through the dashboard. To do so, go to 'Admin > System > System Information > Orchestration Services'. This page is only available if the Orchestration service is deployed.

6.1.17. python-glance-store

BZ#1236055
RBD snapshots and cloning are now used for Ceph-based ephemeral disk snapshots. With this update, data is manipulated within the Ceph server, rather than transferred across nodes, resulting in better snapshotting performance for Ceph.

6.1.18. python-ironicclient

BZ#1212134
Previously, certain operations in OpenStack Bare Metal Provisioning (Ironic) would fail to run while the node was in a `locked` state.
This update implements a `retry` function in the Ironic client. As a result, certain operations take longer to run, but do not fail due to `node locked` errors.
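The retry behavior can be sketched as follows. This is a simplified illustration; the exception class and call names are hypothetical stand-ins for the client's internals:

```python
import time

class NodeLockedError(Exception):
    """Raised when the target node is locked by another operation."""

def call_with_retry(func, attempts=5, interval=2.0):
    """Retry 'func' while the node is locked, instead of failing outright."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except NodeLockedError:
            if attempt == attempts:
                raise  # give up only after exhausting all attempts
            time.sleep(interval)
```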

6.1.19. python-openstackclient

BZ#1194779
The python-openstackclient package has been rebased to upstream version 1.0.3. This rebase features new fixes and enhancements relating to support for the Identity service's v3 API.

6.1.20. qemu-kvm-rhev

BZ#1216130
On a virtual disk with a high number of sectors, the number of sectors was in some cases handled incorrectly, and converting a QEMU image failed with an "invalid argument" error. This update fixes the incorrect calculation that caused this error, and the described failure no longer occurs.
BZ#1240402
Due to an incorrect implementation of portable memory barriers, the QEMU emulator in some cases terminated unexpectedly when a virtual disk was under heavy I/O load. This update fixes the implementation in order to achieve correct synchronization between QEMU's threads. As a result, the described crash no longer occurs.

6.1.21. sahara-image-elements

BZ#1155241
This package allows users to create HDP 2.0.6 and CDH 5.3.0 images for use in RHEL OpenStack Platform 7.
BZ#1231934
Previously, CDH image generation sometimes failed, because the image creation wrapper script specified too small a space for generation of the CDH image on some systems. Now, the image generation space is increased for CDH images, and images are generated successfully.

6.1.22. sos

BZ#1232720
When using the sosreport utility on a Pacemaker node, one of the MariaDB (MySQL) server log files was not collected properly. With this update, the underlying code has been corrected, and the log file is now collected as expected.
BZ#1240667
Previously, various OpenStack plug-ins for the sosreport utility were incorrectly collecting passwords in plain text. As a consequence, the compressed file created after using sosreport could contain human-readable passwords. This update adds obfuscation of all passwords to sosreport OpenStack plug-ins, and the affected passwords in the sosreport tarball are no longer human-readable.