- Issued: 2017-09-21
- Updated: 2017-09-21
RHEA-2017:2773 - Product Enhancement Advisory
Synopsis
new packages: gluster-block
Type/Severity
Product Enhancement Advisory
Topic
New gluster-block packages are now available for Red Hat Gluster Storage 3.3 on Red Hat Enterprise Linux 7.
Description
gluster-block is a distributed management framework for block devices,
provided as a command-line utility. It aims to make Gluster-backed block
storage creation and maintenance as simple as possible. gluster-block
can provision block devices and export them as iSCSI LUNs across multiple
nodes, using the iSCSI protocol to transfer data as SCSI block commands.
Block storage allows the creation of high-performance individual storage
units. Unlike the traditional file storage that glusterfs provides, each
storage volume/block device is treated as an independent disk drive, so
each one can host its own file system.
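As an illustration of this workflow, a block device can be provisioned and inspected with the gluster-block CLI roughly as follows. The volume name, block name, host IPs, and size below are placeholders, not values from this advisory:

```shell
# Provision a 1 GiB block device "block0" on the Gluster volume "blockvol",
# exported as an iSCSI LUN from three nodes (ha 3 for multipath failover).
# The IP addresses are illustrative placeholders.
gluster-block create blockvol/block0 ha 3 192.168.1.11,192.168.1.12,192.168.1.13 1GiB

# List block devices hosted on the volume, then show details for one of them.
gluster-block list blockvol
gluster-block info blockvol/block0
```

With `ha 3`, the same LUN is exported from three gateway nodes, so an initiator using multipath can survive the loss of a single node.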
This enhancement update adds the gluster-block package to
Red Hat Gluster Storage 3.3. It also adds the supporting package
tcmu-runner, a user space service that exports files on Gluster volumes
as iSCSI back-end storage.
All users who require gluster-block should install these new packages.
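On the initiator side, the exported LUNs are consumed with the standard open-iscsi tooling. A rough sketch, where the portal IP is a placeholder and the target IQN is whatever `gluster-block info` reports:

```shell
# Discover iSCSI targets exported by a gluster-block gateway node
# (placeholder portal address).
iscsiadm -m discovery -t sendtargets -p 192.168.1.11

# Log in to a discovered target; substitute the IQN printed by discovery.
iscsiadm -m node -T <target-iqn> -l

# The LUN then appears as a regular SCSI disk (e.g. /dev/sdX) on the initiator.
lsblk
```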
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to the Red Hat Knowledgebase article on applying package updates.
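Assuming the system is already subscribed to the Red Hat Gluster Storage channels, the procedure can be sketched as:

```shell
# Install the packages introduced by this advisory
# (pulls in dependencies such as libtcmu, targetcli, and python-rtslib).
yum install gluster-block tcmu-runner

# On systems where they are already present, apply the erratum instead.
yum update gluster-block tcmu-runner

# gluster-blockd manages the tcmu-runner backend; enable and start it.
systemctl enable gluster-blockd
systemctl start gluster-blockd
```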
Affected Products
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64
Fixes
- BZ - 1383116 - [RFE][gluster-block]:Need tcmu-runner packaged in RHGS
- BZ - 1418228 - [RFE] Gluster block storage
- BZ - 1425460 - [gluster-block]:package targetcli needed by gluster-block
- BZ - 1425465 - [gluster-block]:package gluster-block
- BZ - 1430622 - [gluster-block]:package python-rtslib needed by gluster-block
- BZ - 1444770 - [gluster-block]: Authentication failure while discovering targets from an initiator
- BZ - 1445364 - [gluster-block]: gluster-block command doesn't error out or give a message when the gluster-blockd service is not running on the node
- BZ - 1446572 - gluster-block: doesn't support json responses
- BZ - 1446581 - gluster-block: single login to discover all gateways
- BZ - 1446584 - gluster-block: daemon crashes when an invalid hostname is given
- BZ - 1447360 - [gluster-block]:Creates block device even with invalid host IP
- BZ - 1448433 - [gluster-block]: tcmu-runner crashes if glusterd is stopped and tcmu-runner is then started
- BZ - 1449234 - [gluster-block]: Add the iscsid service to gluster-blockd init so iscsid starts when gluster-blockd starts
- BZ - 1449245 - [gluster-block]: vmcore generated when deleting a block with ha 3 while one of the nodes is down
- BZ - 1449627 - [gluster-block] rebase rhgs-3.3 tcmu-runner to upstream tcmu-runner master
- BZ - 1449690 - tcmu-runner fails to run in a container.
- BZ - 1450824 - "gluster-block vol delete" doesn't work properly
- BZ - 1450983 - [gluster-block]:gluster-block list fails to provide the output and crashes gluster-block
- BZ - 1451687 - [gluster-block]:After upgrading to latest tcmu-runner , gluster-blockd fails to start because it can not start tcmu-runner
- BZ - 1452036 - Gluster-block fails to create volumes and returns no proper error
- BZ - 1452194 - [gluster-block]: Create of gluster-block fails after setting the block profile because data does not get synced
- BZ - 1452198 - [gluster-block]:Segmentation fault when cancelling the executed gluster-block command
- BZ - 1452919 - heap-buffer-overflow in gluster-blockd
- BZ - 1452936 - tcmu-runner crashes when we create 10 blocks and delete them in a loop
- BZ - 1453179 - gluster-blockd exits on SIGPIPE
- BZ - 1454335 - gluster-block info doesn't show status of configured nodes when the node is down at the time of delete
- BZ - 1454672 - error message needs to be better when tcmu-runner is not running and gluster-block cmd is executed
- BZ - 1454687 - gluster-blockd crashes when one of the config nodes is not reachable
- BZ - 1455992 - Poor performance on gluster block device
- BZ - 1456122 - gluster-block neither gets created nor gets deleted
- BZ - 1456226 - use-after free on latest version of tcmu-runner
- BZ - 1456227 - Use after free when doing targetcli clearconfig confirm=True
- BZ - 1456231 - gluster-blockd gets OOM killed
- BZ - 1456686 - Use after free bug in parse_imagepath is crashing tcmu-runner
- BZ - 1458656 - gluster-blockd: server nodes doesn't log
- BZ - 1459839 - gluster-block: option to choose prealloc = full | off
- BZ - 1461118 - gluster-blockd comes up even when rpcbind service is not running
- BZ - 1463173 - [gluster-block]: Block create command fails on rhel7.4 with python exception
- BZ - 1464402 - gluster-block: system() check for WIFEXITED before WEXITSTATUS
- BZ - 1464404 - logger: log (null) while trying to print volume name
- BZ - 1464405 - gluster-block: cli segfault if log dir doesn't exist
- BZ - 1464408 - gluster-block: cli output looks weird, merging multiple lines without any spaces
- BZ - 1464418 - tcmu-runner: fix pthread_*() return value checks
- BZ - 1464419 - tcmu-runner: fix fd leak
- BZ - 1464421 - tcmu-runner: protect glfs objects cache from race
- BZ - 1464493 - gluster-block: remove iscsid.service from systemd unit and add rpcbind as dependency in spec
- BZ - 1464501 - tcmu-runner: Update spec with 'Requires: libtcmu'
- BZ - 1464641 - gluster-block create prints "(null)" when tcmu-runner is not running
- BZ - 1468506 - gluster-block: block delete causing kernel crash/reboot due to page_fault
- BZ - 1470349 - create: for an HA >1 target portals are not created as expected
- BZ - 1473162 - [Gluster-block]: VM core generated, with gluster-block (failed) create
- BZ - 1474188 - kernel/tcmu-runner: fix uio_poll crash during uio device removal
- BZ - 1474256 - [Gluster-block]: Block continues to exist in a non-healthy state after a failed delete
- BZ - 1477225 - logger: make INFO as default loglevel
- BZ - 1477455 - app pod with block vol as pvc fails to restart as iSCSI login fails
- BZ - 1477547 - Gluster block should have log file path configurable
- BZ - 1479355 - make sure glusterd comes up before gluster-blockd and the rest of the services
- BZ - 1482057 - gluster-block: cli times out after ~400 block creates on a node
- BZ - 1483827 - Avoid using port 24006 as it is registered
- BZ - 1487527 - gluster-block device creation fails
- BZ - 1489744 - gluster-block spec: remove targetcli higher version dependency check than what rhel provides
- BZ - 1489745 - tcmu-runner spec: remove targetcli higher version dependency check than what rhel provides
- BZ - 1491758 - [CNS env]: app pod crashes when node containing gluster pod is rebooted
- BZ - 1492556 - [CNS env] - gluster-blockd crashed when a bunch of block devices were deleted
- BZ - 1493113 - tune loglevel to file gluster-block-configshell.log as INFO from DEBUG
CVEs
(none)
References
(none)
Red Hat Gluster Storage Server for On-premise 3 for RHEL 7
| SRPM | SHA-256 |
|---|---|
| gluster-block-0.2.1-14.el7rhgs.src.rpm | 1e2c4882fb66a14a4b3363b83351931add3c59935f111db71f04a98e0b29c6d4 |
| tcmu-runner-1.2.0-15.el7rhgs.src.rpm | 87a26b019a8cc8e270344701a17b154a3135d3a4be06c423d00d6f5f658960c5 |

| x86_64 | SHA-256 |
|---|---|
| gluster-block-0.2.1-14.el7rhgs.x86_64.rpm | 13aa7a1812d471023811968a9a0ea886dab5b97f2ce2fd55eaa5a32670f8f374 |
| gluster-block-debuginfo-0.2.1-14.el7rhgs.x86_64.rpm | 00d6b16ebc43821b12461813f323ab38e534783395711e9999e30af2ac2031ca |
| libtcmu-1.2.0-15.el7rhgs.x86_64.rpm | 7b04d5155a4cf1748629a5671a701c50ab10d099ebbdc73d767362d8edec185a |
| libtcmu-devel-1.2.0-15.el7rhgs.x86_64.rpm | fc93604bb6f9764b0c04363631a1deaa016044921b56480c6e18ea4831677f51 |
| tcmu-runner-1.2.0-15.el7rhgs.x86_64.rpm | 699962bad82c50ce454a92aeb7d39879e8db63735fe2d2c8269f07ace06c0d67 |
| tcmu-runner-debuginfo-1.2.0-15.el7rhgs.x86_64.rpm | a607b29572284e9747dae87cefa5b0fc068a549b646c6197640c9042cca17ce8 |
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.