3.3. Red Hat OpenStack Platform 10 Maintenance Release - 26 June 2018

These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.

3.3.1. Enhancements

This release of Red Hat OpenStack Platform features the following enhancements:
BZ#1365865
The Red Hat OpenDaylight controller does not support clustering in this release, but High Availability is provided for the neutron API service by default.
BZ#1568355
Support for dpdkvhostuserclient mode has been backported. This feature allows OVS to connect to the vhost socket as a client, which makes it possible to reconnect to the socket without restarting the VM if OVS crashes or restarts.

NOTE: 
* All VMs should be migrated to dpdkvhostuserclient mode.

* Live migration does not work for existing VMs; to move a VM to dpdkvhostuserclient mode, either snapshot and re-create it, or use cold migration.

* Add or modify the NeutronVhostuserSocketDir parameter, setting it to "/var/lib/vhost_sockets".

* For a new installation, also remove the "set_ovs_config" section from the sample first-boot script[1].

* Add the additional environment file environments/ovs-dpdk-permissions.yaml for OVS-DPDK deployments (for new installations and minor updates); see the sketch after this note.

* All of these validations were performed with OVS version 2.9.

[1] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/network_functions_virtualization_configuration_guide/index#ap-ovsdpdk-first-boot
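
A minimal sketch of the deployment pieces described in the note above, assuming the default tripleo-heat-templates location and a hypothetical custom environment file named vhostuser-socket.yaml that sets the socket directory; all other deployment options are omitted:

# vhostuser-socket.yaml is a hypothetical file containing:
#   parameter_defaults:
#     NeutronVhostuserSocketDir: "/var/lib/vhost_sockets"
#
# Include it together with the OVS-DPDK permissions environment file.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-dpdk-permissions.yaml \
  -e vhostuser-socket.yaml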

3.3.2. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat OpenStack Platform. You must take this information into account to ensure the best possible outcomes for your deployment.
BZ#1587957
When a deployment starts with OVS 2.9, OVS-DPDK is enabled by the first-boot script before the kernel arguments are applied and the node is rebooted. This causes the vswitchd service to fail initially; once the kernel arguments are applied and the node has been rebooted, vswitchd runs as expected with DPDK enabled. The failure messages from this initial stage, before the reboot, can be ignored.

3.3.3. Known Issues

These known issues exist in Red Hat OpenStack Platform at this time:
BZ#1394402
To minimize interruptions to the CPUs allocated to Open vSwitch, virtual machine vCPUs, or the VNF threads within the virtual machines, those CPUs should be isolated. However, CPUAffinity cannot prevent all kernel threads from running on these CPUs. To prevent most kernel threads, you must use the boot option 'isolcpus=<cpulist>', which takes the same CPU list as 'nohz_full' and 'rcu_nocbs'. Because 'isolcpus' is engaged at kernel boot, it can prevent many kernel threads from being scheduled on those CPUs. It can be applied on both the hypervisor and the guest.

1) The following snippet copies the CPU list from the 'nohz_full' kernel argument into an 'isolcpus' argument on the default kernel:

#!/bin/bash
# Extract the CPU list from the nohz_full= kernel argument.
isol_cpus=$(awk '{ for (i = 1; i <= NF; i++) if ($i ~ /nohz/) print $i }' /proc/cmdline | cut -d"=" -f2)

if [ ! -z "$isol_cpus" ]; then
  # Apply the same CPU list as isolcpus= on the default kernel.
  grubby --update-kernel=$(grubby --default-kernel) --args=isolcpus=$isol_cpus
fi


2) The following snippet re-pins the emulator threads of all running VMs to the CPUAffinity CPU set. It is not recommended unless you experience specific performance problems.

#!/bin/bash
# Convert the space-separated CPUAffinity list into a comma-separated list.
cpu_list=$(grep -e "^CPUAffinity=.*" /etc/systemd/system.conf | sed -e 's/CPUAffinity=//' -e 's/ /,/g')
if [ ! -z "$cpu_list" ]; then
  # Collect the names of all running domains.
  virsh_list=$(virsh list | sed -e '1,2d' -e 's/\s\+/ /g' | awk -F" " '{print $2}')
  if [ ! -z "$virsh_list" ]; then
    # Pin each VM's emulator threads to the CPUAffinity CPU set.
    for vm in $virsh_list; do virsh emulatorpin $vm --cpulist $cpu_list; done
  fi
fi
BZ#1394537
After a `tuned` profile is activated, the `tuned` service must start before the `openvswitch` service in order for the cores allocated to the PMD to be set correctly.

As a workaround, you can adjust the `tuned` service unit by running the following script:

#!/bin/bash

tuned_service=/usr/lib/systemd/system/tuned.service

# Drop network.target from the unit's After= ordering.
grep -q "network.target" $tuned_service
if [ "$?" -eq 0 ]; then
  sed -i '/After=.*/s/network.target//g' $tuned_service
fi

# Ensure tuned is ordered before network.target and openvswitch.
grep -q "Before=.*network.target" $tuned_service
if [ ! "$?" -eq 0 ]; then
  grep -q "Before=.*" $tuned_service
  if [ "$?" -eq 0 ]; then
    # Append to the existing Before= line.
    sed -i 's/^\(Before=.*\)/\1 network.target openvswitch.service/g' $tuned_service
  else
    # No Before= line exists; insert one above the After= line.
    sed -i '/After/i Before=network.target openvswitch.service' $tuned_service
  fi
fi

systemctl daemon-reload
systemctl restart openvswitch
exit 0
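
After the script runs, a quick optional check (not part of the original workaround) can confirm the new unit ordering:

# network.target and openvswitch.service should now appear under Before=.
systemctl show tuned.service -p Before -p After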
BZ#1489070
The new iptables version that ships with RHEL 7.4 includes a new --wait parameter. This parameter allows iptables commands issued in parallel to wait until the lock held by a prior command is released. For OpenStack, the neutron service provides its own iptables locking, but only at the router level.

As such, when processing routers (for example, during a full sync after the l3 agent is started), some iptables commands issued by neutron may fail because they hit this lock and require the --wait parameter, which is not yet available in neutron. On affected routers, some floating IPs will malfunction, or some instances may be unable to reach the metadata API during cloud-init.

We recommend that you do not upgrade to RHEL 7.4 until neutron is released with a fix that adopts the new iptables --wait parameter.
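
For illustration, this is how the new parameter is invoked; the rule shown here is only a placeholder:

# Without --wait, a concurrent iptables command fails immediately when
# another process holds the xtables lock; with --wait it blocks until
# the lock is released.
iptables --wait -I INPUT -s 192.0.2.1 -j DROP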

3.3.4. Deprecated Functionality

The items in this section are either no longer supported, or will no longer be supported in a future release.
BZ#1402497
Certain CLI arguments are considered deprecated and should not be used. The update still allows you to use these CLI arguments, but you must specify at least an environment file to set the `sat_repo`. You can work around the issue with an `env` file before running the overcloud command:

1. cp -r /usr/share/openstack-tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration .

2. Edit rhel-registration/environment-rhel-registration.yaml and set rhel_reg_org, rhel_reg_activation_key, rhel_reg_method, rhel_reg_sat_repo, and rhel_reg_sat_url according to your environment.

3. Run the deployment command with -e rhel-registration/rhel-registration-resource-registry.yaml -e rhel-registration/environment-rhel-registration.yaml, as shown in the sketch below.
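
A minimal sketch of step 3; your actual command will include your usual templates and any additional environment files:

# Hypothetical minimal invocation; add your deployment's other options.
openstack overcloud deploy --templates \
  -e rhel-registration/rhel-registration-resource-registry.yaml \
  -e rhel-registration/environment-rhel-registration.yaml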

This workaround has been verified with both Red Hat Satellite 5 and 6, with the repos present on the overcloud nodes after successful deployment.