3.4. Known Issues

These known issues exist in Red Hat OpenStack Platform at this time:
BZ#1177611
A known issue has been identified in the interaction between High Availability (VRRP) routers and L2 Population. When an HA router is connected to a subnet, it uses a distributed port by design: each node the router is scheduled on has the same port details, but only the master router has IP addresses configured on that port; the slaves have the port without any IP addresses configured.
Consequently, L2 Population advertises stale information: it reports the router as present on the node recorded in the port binding information for that port.
As a result, each node that has a port on that logical network creates a tunnel only to the node where the port is presumably bound, and sets a forwarding entry so that any traffic to that port is sent through that tunnel.
However, this may not succeed, as there is no guarantee that the master router is on the node specified in the port binding. Furthermore, even when the master router is on that node, a failover event migrates it to another node and results in a loss of connectivity with the router.
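To see where the mismatch occurs for a given router, you can compare the node hosting the master instance with the host recorded in the port binding. This is a minimal sketch; the router and port IDs are placeholders.
----
# List the L3 agents hosting the router; the ha_state column shows
# which node holds the master (active) instance.
$ neutron l3-agent-list-hosting-router <router-id>

# Compare with the host recorded in the port binding for the
# router's port on the subnet.
$ neutron port-show <port-id> | grep binding:host_id
----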
BZ#1234601
The Ramdisk and Kernel images booted without specifying a particular interface. This meant the system booted from any network adapter, which caused problems when more than one interface was on the Provisioning network. In those cases, it is necessary to specify which interface the system should use to boot: the interface that carries the MAC address listed in the instackenv.json file.

As a workaround, copy and paste the following block of text as the root user into the director's terminal. This creates a systemd startup script that sets these parameters on every boot.

The script contains a sed command which includes "net0/mac". This sets the director to use the first Ethernet interface. Change this to "net1/mac" to use the second interface, and so on.

#####################################
cat << EOF > /usr/bin/bootif-fix
#!/usr/bin/env bash
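# Continuously rewrite the iPXE templates under /httpboot/ so that
# {mac} becomes {net0/mac}, pinning the boot interface.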

while true; do
        find /httpboot/ -type f ! -iname "kernel" ! -iname "ramdisk" ! -iname "*.kernel" ! -iname "*.ramdisk" -exec sed -i 's|{mac|{net0/mac|g' {} +
        # Sleep briefly between passes so the loop does not hold a CPU at 100%
        sleep 10
done
EOF

chmod a+x /usr/bin/bootif-fix

cat << EOF > /usr/lib/systemd/system/bootif-fix.service
[Unit]
Description=Automated fix for incorrect iPXE BOOTIF

[Service]
Type=simple
ExecStart=/usr/bin/bootif-fix

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable bootif-fix
systemctl start bootif-fix

#######################################

The bootif-fix script runs on every boot. This enables booting from a specified NIC when more than one NIC is on the Provisioning network. To disable the service and return to the previous behavior, run "systemctl disable bootif-fix" and reboot.
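To confirm the workaround is active, you can check the service status and verify that the files under /httpboot now reference the chosen interface; this simply reads back what the script above writes.
----
$ sudo systemctl status bootif-fix
$ grep -r 'net0/mac' /httpboot/
----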
BZ#1237009
The swift proxy port is denied in the Undercloud firewall. This means the swift proxy only accepts connections from localhost. As a workaround, open the swift proxy port in the firewall:

$ sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT

This enables connections to the swift proxy from remote machines.
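To verify the change from a remote machine, you can query the proxy port directly; 192.0.2.1 is a placeholder for the Undercloud address, and the /healthcheck path assumes the standard swift healthcheck middleware is enabled.
----
$ curl -i http://192.0.2.1:8080/healthcheck
----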
BZ#1268426
There is a known issue that can occur when IP address conflicts exist on the Provisioning network. As a consequence, discovery and/or deployment tasks fail for hosts that are assigned an IP address that is already in use.
You can work around this issue by performing a port scan of the Provisioning network. Run from the Undercloud node, this scan helps validate whether the IP addresses used for the discovery and host IP ranges are available for allocation. You can perform this scan using the nmap utility. For example, replacing 192.0.2.0/24 with the subnet of the Provisioning network in CIDR format:
----
$ sudo yum install -y nmap
$ nmap -sn 192.0.2.0/24
----
If any of the IP addresses in use conflict with the IP ranges in undercloud.conf, you need to either change the IP ranges or free up the IP addresses before running the introspection process or deploying the Overcloud nodes.
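To cross-check the scan results against the configured ranges, list the relevant options in undercloud.conf; dhcp_start, dhcp_end, and inspection_iprange are the options that define the deployment and discovery ranges.
----
$ grep -E 'dhcp_start|dhcp_end|inspection_iprange' ~/undercloud.conf
----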
BZ#1272591
The Undercloud uses the Public API to configure service endpoints during the post-deployment stage, which means the Undercloud must be able to reach the Public API to complete the deployment. If the External uplink on the Undercloud is not on the same subnet as the Public API, the Undercloud requires a route to the Public API, and any firewall ACLs must allow this traffic. With this route in place, the Undercloud connects to the Public API and completes the post-deployment tasks.
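As an illustration, a static route to the Public API can be added on the Undercloud; 10.0.0.0/24 and 192.0.2.254 are placeholders for the Public API subnet and the next-hop gateway on the External uplink.
----
$ sudo ip route add 10.0.0.0/24 via 192.0.2.254
----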
BZ#1290881
The default driver for the Block Storage service is the internal LVM software iSCSI driver, which manages local volumes as the volume back end.

However, the Cinder iSCSI LVM driver has significant performance issues. In production environments with high I/O activity, there are many potential issues that could affect performance or data integrity.

Red Hat strongly recommends using a certified Block Storage plug-in provider for storage in a production environment. The software iSCSI LVM driver is only supported for single-node evaluations and proof-of-concept environments.
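To check which volume driver a deployment currently uses, you can inspect the Block Storage configuration; volume_driver is the standard option name in /etc/cinder/cinder.conf.
----
$ sudo grep volume_driver /etc/cinder/cinder.conf
----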
BZ#1293979
Updating packages on the Undercloud left the Undercloud in an indeterminate state. This meant some Undercloud services were disabled after the package update and could not start again. As a workaround, run 'openstack undercloud install' to reconfigure all Undercloud services. After the command completes, the Undercloud services operate normally.
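A quick way to spot affected services before re-running the installer is to list failed units; the unit name patterns below are illustrative.
----
$ sudo systemctl list-units --state=failed 'openstack-*' 'neutron-*'
$ openstack undercloud install
----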
BZ#1295374
It is currently not possible to deploy Red Hat OpenStack Platform Director 10 with VXLAN over VLAN tunneling, because the VLAN port is not compatible with the DPDK port.

As a workaround, after deploying Red Hat OpenStack Platform Director with VXLAN, run the following:

# ifup br-link
# systemctl restart neutron-openvswitch-agent

* Add the local IP address to the br-link bridge:
# ip addr add <local_IP/PREFIX> dev br-link

* Tag the br-link port with the VLAN ID used for the tenant network:
# ovs-vsctl set port br-link tag=<VLAN-ID>
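
To confirm the tag was applied, read the value back from Open vSwitch:
# ovs-vsctl get port br-link tag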
BZ#1463061
When using Red Hat Ceph Storage as a back end for both Block Storage (cinder) volumes and backups, any attempt to perform an incremental backup will result in a full backup instead, without any warning.
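For example, requesting an incremental backup with the cinder client succeeds, but the stored backup is a full copy; the backup name and <volume> are placeholders.
----
$ cinder backup-create --incremental --name vol-backup-2 <volume>
----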
BZ#1321179
OpenStack command-line clients that use `python-requests` cannot currently validate certificates that have an IP address in the SAN field.
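As an illustration of one possible workaround, connect using a DNS name that appears in the certificate instead of the IP address; the hostname, address, and auth URL below are placeholders, and this assumes the certificate carries a matching DNS entry.
----
$ echo "192.0.2.1 overcloud.example.com" | sudo tee -a /etc/hosts
$ export OS_AUTH_URL=https://overcloud.example.com:13000/v2.0
----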