4.3. RHEA-2018:2332 — Red Hat OpenStack Platform 12.0 Security Advisory August 2018
Virtual CPUs (vCPUs) can be preempted by the hypervisor kernel thread even with strong partitioning in place (isolcpus, tuned). Preemptions are infrequent (a few per second), but with only 256 descriptors per virtio queue, a single vCPU preemption can cause packet drops, because all 256 slots fill up while the vCPU is preempted. This affects network functions virtualization (NFV) VMs in which the per-queue packet rate exceeds 1 Mpps (1 million packets per second). This release supports two new tunable options: 'rx_queue_size' and 'tx_queue_size'. Use these options to configure the RX queue size and TX queue size of virtio NICs, respectively, to reduce packet drop.
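A minimal nova.conf sketch of the two tunables described above. The values shown are illustrative, not recommendations; both options belong to the libvirt driver section:

```ini
# Illustrative nova.conf fragment: larger RX/TX virtio rings reduce drops
# when a vCPU preemption briefly stalls the guest.
[libvirt]
rx_queue_size = 1024
tx_queue_size = 1024
```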
Previously, the ability to set an admin password in the metadata service was not implemented for the libvirt driver, causing the 'nova get-password' command to return nothing. This release implements this capability for the libvirt driver: the admin password is now saved in the metadata service, and the 'nova get-password' command returns that password.
This update slows the initial stages of live migrations to eliminate packet loss. Previously, instances with LinuxBridge VIFs experienced packet loss during live migration because Neutron did not have enough time to complete the plugging of the VIFs and related networking infrastructure on the destination host. Live migrations are now initially slowed to ensure Neutron has adequate time to wire up the VIFs on the destination. Once complete, Neutron sends an event to Nova, returning the migration to full speed. When used with LinuxBridge VIFs on Pike, this requires Neutron 11.0.4 or greater, which includes the Icb039ae2d465e3822ab07ae4f9bc405c1362afba bugfix.
Prior to this update, to re-discover a compute node record after deleting a host mapping from the API database, the compute node record had to be manually marked as unmapped. Otherwise, a compute node with the same hostname could not be mapped back to the cell from which it was removed. With this update, the compute node record is automatically marked as unmapped when you delete a host from a cell, enabling a compute node with the same hostname to be added to the cell during host discovery.
This update prevents CPU pinning mismatches during Nova live migrations. Prior to the update, the scheduler did not check whether the guest CPU pinning configuration was supported on the host. A mismatch of CPU pinning caused errors during bootup on the host, and this failed scenario could be repeated over a series of potential hosts. A new condition in the NUMATopologyFilter filter identifies hosts with proper CPU pinning capability. If no suitable hosts are available, the migration fails quickly with an error message.
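For the pinning check above to run, NUMATopologyFilter must be enabled in the scheduler's filter list. A sketch of the relevant nova.conf fragment (the filter list shown is abbreviated and illustrative):

```ini
# Illustrative nova.conf fragment: NUMATopologyFilter must be among the
# enabled scheduler filters for CPU pinning capability to be checked.
[filter_scheduler]
enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,NUMATopologyFilter
```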
This update prevents an unintended bypass of the scheduler filters that could occur after the scheduler refused a rebuild request sent by nova. If a user rebuilds an instance with a new image, the change from old image to new image causes nova to send the rebuild request to the scheduler to make sure it is allowed according to the scheduler filters. Prior to this update, if the scheduler refused the request, the instance's image reference was not rolled back to the original image. This caused an inconsistency between the original image actually in use by the instance and the new image reference saved in the database. As a result, a second rebuild request with the same new image would bypass the scheduler and be allowed, because the image in the rebuild request matched the instance's image in the database, even though the image actually in use by the instance was the original one. This bypass of scheduler filters was considered a security flaw. As of this update, when a rebuild request is refused by the scheduler, the image reference is rolled back to the original. If another rebuild request is made with the same new image, it is correctly identified as being different from the instance's current image, and the request is sent to the scheduler.
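The rollback behavior described above can be sketched as follows. This is a hypothetical simplification, not Nova's actual code; the `Instance`, `rebuild`, and `scheduler_allows` names are invented for illustration:

```python
# Hypothetical sketch of the fix: if the scheduler refuses a rebuild with
# a new image, the stored image reference is rolled back, so a repeated
# request with the same image is still seen as a change and re-validated.

class Instance:
    def __init__(self, image_ref):
        self.image_ref = image_ref


def rebuild(instance, new_image, scheduler_allows):
    """Return True if the rebuild proceeds, False if the scheduler refuses."""
    if new_image == instance.image_ref:
        # Same image as recorded in the database: no scheduler validation.
        return True
    original = instance.image_ref
    instance.image_ref = new_image
    if not scheduler_allows(new_image):
        # The fix: restore the reference so the database keeps matching
        # the image actually in use by the instance.
        instance.image_ref = original
        return False
    return True
```

Before the fix, the refused branch left `instance.image_ref` pointing at the new image, so a second identical request took the "same image" shortcut and skipped validation entirely.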
Prior to this update, a volume detach operation performed under certain failure scenarios could result in the removal of a volume's libvirt definition without full removal of the associated logical unit (LUN) from the host. This allowed Cinder to incorrectly perform subsequent operations while the compute host still had active paths to the device. As of this update, even under a failure scenario, Nova compute attempts to disconnect the LUN from the host, ensuring the logical unit is properly released on the host.