RHBA-2019:0285 - Bug Fix Advisory
gluster-block & tcmu-runner bug fix update
Bug Fix Advisory
Updated gluster-block and tcmu-runner packages that fix several bugs are now available for OpenShift Container Storage 3.11.1.
gluster-block is a distributed management framework for block devices, provided as a command line utility. It aims to make Gluster-backed block storage creation and maintenance as simple as possible. gluster-block can provision block devices and export them as iSCSI LUNs across multiple nodes, using the iSCSI protocol to transfer data as SCSI blocks/commands.
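For illustration, provisioning a block device with the gluster-block CLI might look like the following. This is a sketch, not taken from the advisory: the block-hosting volume name (`block-hosting`), block name (`sample-block`), host addresses, and size are placeholders, and the cluster is assumed to already have a suitable Gluster volume available.

```
# Create a 1 GiB block device on the hosting volume, exported as an
# iSCSI LUN from three nodes (ha 3) for multipath access.
# Volume/block names and IPs below are placeholders.
gluster-block create block-hosting/sample-block ha 3 \
    192.168.1.11,192.168.1.12,192.168.1.13 1GiB

# List and inspect blocks on the hosting volume.
gluster-block list block-hosting
gluster-block info block-hosting/sample-block
```

The initiator side then logs in to the exported targets and assembles the paths into a single multipath device.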
The tcmu-runner packages provide a service that handles the complexity of the LIO kernel target's userspace passthrough interface (TCMU). It presents a C plugin API for extension modules that handle SCSI requests in ways not possible or suitable to be handled by LIO's in-kernel backstore.
Block storage allows the creation of high-performance individual storage units. Unlike the traditional file storage that glusterfs supports, each storage volume/block device can be treated as an independent disk drive, so each one can host its own file system.
The advisory fixes the following bugs:
- As of OCS 3.11.1, the minimum kernel version required for gluster-block is kernel-3.10.0-862.14.4 (RHEL-7.5.4). (BZ#1643185)
- Previously, DEBUG_SCSI command logs from the tcmu-runner daemon were written to stdout, with the expectation that the syslog buffer would record them. This did not work as expected because most containers do not run a syslogd service. With this fix, the DEBUG_SCSI command logs are sent through the syslog system call, so the container host node records the logs even when no syslogd service is running inside the container. (BZ#1649630)
- Previously, block volumes created before the load balancing feature was introduced did not have a prio_path set. With this fix, genconfig, which runs automatically after the upgrade, generates prio_paths for these older block volumes. (BZ#1643074)
- Previously, after a block volume was mounted, if any brick of the block hosting volume (BHV) went down, all multipath paths to the block volume could enter a failed state. With even a single brick down, the backend glusterfs volume took too long (~14 minutes) to respond to I/O requests, while the expected response time is 42 seconds. As a result, all applications using the block volume encountered input/output errors. With this fix, the block hosting volume's server.tcp-user-timeout option is set to 42 seconds by default. (BZ#1624698)
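The timeout fix for BZ#1624698 corresponds to a standard glusterfs volume option that an administrator could also apply or verify by hand. A sketch, assuming a block-hosting volume named `block-hosting` (the volume name is a placeholder):

```
# Cap how long the server waits on an unresponsive TCP connection,
# so paths fail over within the expected 42 seconds rather than
# stalling I/O for minutes. Volume name below is a placeholder.
gluster volume set block-hosting server.tcp-user-timeout 42

# Confirm the option took effect.
gluster volume get block-hosting server.tcp-user-timeout
```

On upgraded clusters this value is applied by default, so manual tuning should only be needed for verification or for volumes managed outside the normal upgrade path.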
Users of gluster-block and tcmu-runner are advised to upgrade to these updated packages, which fix these bugs.
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to:
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64
- BZ - 1596021 - [Tracking] instead of mapper device, single path is used for mounting the block volume
- BZ - 1598740 - On app pod restart, mpath device name is not mapped/created for some blockvolumes in the new initiator side
- BZ - 1619264 - [Tracking BZ#1632719] [F-QE] App Pods with block pvcs attached went into CrashLoopBackState- input/output error
- BZ - 1624698 - [Tracking BZ#1632719] With only 1 node down, multipath -ll shows multiple paths in "failed" state
- BZ - 1632663 - dump all cli failure msgs to stderr
- BZ - 1635591 - tcmu-runner: There is no notification on changing the log-level value in tcmu.conf file
- BZ - 1637688 - OCS 3.10 Multipath Adds ALUA which may not always be supported
- BZ - 1641570 - gluster-block: Fix minor rpm-build warnings
- BZ - 1641575 - dyn-reload: read the config file line by line instead of allocating a fixed 32K buffer
- BZ - 1643074 - genconfig: set prio-path for old block volumes
- BZ - 1643185 - gluster-block: adopt 3.10.0-862.14.4 (RHEL-7.5.4) as the minimum required
- BZ - 1649630 - tcmu-runner/dyn-logger: DEBUG SCSI CMD loglevel is not working in the container which has no syslogd service installed