RHBA-2019:3257 - Bug Fix Advisory
rhgs-server-container bug fix update
An updated rhgs-server-container image that fixes several bugs is now
available in the Red Hat Container Registry.
The OpenShift Container Storage solution provides persistent storage service for OpenShift Containers and OpenShift Infrastructure services.
This advisory fixes the following bugs:
- With this update, the gluster volume that hosts the glusterblock files is tuned for better performance by increasing the number of I/O handling threads (see the illustrative example after this list). (BZ#1708182)
- Previously, a race condition caused the CTR translator to crash when bricks were added to and removed from the brick multiplexing process, leaving the crashed bricks offline for the volume. With this fix, the CTR translator is no longer loaded on volumes that do not need it, and the crash no longer occurs. (BZ#1601791)
- Previously, if a user configured more than 1500 volumes in a 3-node cluster and a node or the glusterd service became unavailable, there was too much volume information to gather during reconnection before the handshake process timed out. This update adds several optimizations to the volume information gathering process, which resolves the issue. (BZ#1710994)
- Previously, iSCSI login failed on node reboot when the gluster-blockd service had not started, because gluster-blockd was not part of the readiness checklist. With this update, the gluster-blockd service is part of the readiness checklist on reboot, so this situation no longer occurs. (BZ#1597726)
- Previously, applications using block volumes hung on I/O when the target server lost its connection to any of the bricks. With this fix, the hang no longer occurs as long as the volume still meets quorum. (BZ#1623438)
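The thread-count tuning described in the first item (BZ#1708182) is applied automatically by the updated image. Purely as an illustration of how a gluster volume option of this kind is set, the commands below use a hypothetical volume name ("myvol") and an example value; they are not values shipped by this update:

    # Illustrative only: raise the number of I/O worker threads on a volume.
    # "myvol" and the value 32 are placeholder examples.
    gluster volume set myvol performance.io-thread-count 32

    # Verify that the option took effect.
    gluster volume get myvol performance.io-thread-count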
All users of the rhgs-server-container image are advised to pull this
updated image, which fixes these bugs, from the Red Hat Container Registry.
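The repository path and tag below are assumptions based on the usual Red Hat Container Catalog naming for this image; confirm the exact path and tag in the catalog entry for rhgs-server-container before pulling:

    # Assumed repository path; verify in the Red Hat Container Catalog.
    docker pull registry.access.redhat.com/rhgs3/rhgs-server-rhel7:latest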
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
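On a RHEL 7 host, one common way to check for and apply outstanding errata is with yum; this is a general sketch, not a step mandated by this advisory:

    # List advisories applicable to the installed packages.
    yum updateinfo list

    # Apply all available updates, including pending errata.
    yum update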
For details on how to apply this update, refer to:

Affected Products:
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64
Fixes:
- BZ - 1597726 - On reboot of the CNS node, ISCSI login failed to its own target IP
- BZ - 1601791 - Core dump in CTR xlator while running pv create, delete and gluster volume heal in parallel
- BZ - 1623438 - [Tracking-RHGS-BZ#1623874] IO errors on block device post rebooting one brick node
- BZ - 1708182 - [Tracking] gluster-block: improvements to volume group profile options list
- BZ - 1710994 - [RFE] Raise the supported limit of volumes per trusted storage pool to 2000
- BZ - 1729027 - Bricks of the volumes are not online after performing an upgrade from OCS3.11.3 to OCS3.11.4 on one of the pods.
- BZ - 1746908 - With latest images gluster pod is in 0/1 state because of glusterd services not up
- BZ - 1758784 - [Tracker #1757420] memory leak in glusterfsd with error from iot_workers_scale function
- BZ - 1762882 - Respin rhgs-server container for OCS 3.11.4 with signed rpms