RHEA-2017:2879 - Product Enhancement Advisory
heketi bug fix and enhancement update
Updated heketi packages that fix several bugs and add various enhancements are now available for Container-Native Storage 3.6 and Container Ready Storage.
Heketi provides volume lifecycle management for Red Hat Gluster Storage. It creates Red Hat Gluster Storage volumes dynamically and supports multiple Red Hat Gluster Storage clusters.
This update adds the following enhancements:
- With this update, block volume creation using heketi is introduced in Container-Native Storage 3.6. This new feature extends heketi to dynamically provision and create block volumes using gluster-block. (BZ#1446069)
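As a sketch of the new capability (the server URL is a placeholder, and the exact flags assume the heketi-cli version shipped with Container-Native Storage 3.6):

```shell
# Point heketi-cli at the heketi service (URL is illustrative).
export HEKETI_CLI_SERVER=http://heketi.example.com:8080

# Create a 100 GiB block volume backed by gluster-block.
heketi-cli blockvolume create --size=100

# List the resulting block volumes.
heketi-cli blockvolume list
```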
- With this update, gluster volume options can be set in Container-Native Storage (CNS) and Container-Ready Storage (CRS) using the heketi-cli. The 'gluster-volume-options' flag can be used to set volume options during volume creation. (BZ#1480123)
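A minimal sketch of setting volume options at creation time (the option string below is an illustrative example, not a recommendation):

```shell
# Create a 10 GiB file volume and apply gluster volume options
# during creation via the new flag.
heketi-cli volume create --size=10 \
    --gluster-volume-options="performance.readdir-ahead on"
```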
In addition, this update fixes the following bugs:
- Previously, replacing a failed node required running multiple commands on the node's devices followed by the node delete command. With this update, a failed node can be replaced with a single command.
For example: heketi-cli node remove [node-id] (BZ#1349875)
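A sketch of the replacement workflow (the server URL is a placeholder, and the node ID must be taken from your own topology):

```shell
# Point heketi-cli at the heketi service (URL is illustrative).
export HEKETI_CLI_SERVER=http://heketi.example.com:8080

# List nodes to find the ID of the failed node.
heketi-cli node list

# Replace the failed node's bricks and remove it in one step.
heketi-cli node remove <node-id>
```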
- Previously, before running the remove device command, administrators had to manually check for ongoing self-heal operations and wait for them to complete to ensure data consistency. With this update, the remove device operation checks for ongoing self-heal operations and aborts with an error if any are found. (BZ#1432004)
- Previously, the volfile server list contained only the IPs of gluster pods hosting one or more bricks for a particular volume. With this update, all servers in the trusted pool are added to the volume's volfile server list. (BZ#1435530)
- Previously, new bricks added to a volume during volume expansion did not have the correct GID set, which led to I/O failures. With this update, the GID is set on all new bricks. (BZ#1440900)
- Prior to this update, heketi performed the 'gluster peer probe' operation only from the first node in the trusted pool. Hence, adding a new node failed if the first node of the pool was not reachable. With this fix, the 'gluster peer probe' operation is retried on the next online node if the first node in the trusted pool is not reachable. (BZ#1441675)
- Prior to this update, the fstab entry was missing in the heketi.json file, so mount points did not persist across node reboots. With this fix, the cns-deploy build contains the fstab entry with the mount paths updated. (BZ#1487514)
- Prior to this update, after expanding a volume using heketi-cli, the volume was not rebalanced, which led to write errors on files residing on old bricks. With this fix, a rebalance operation is initiated on the volume after every expansion. (BZ#1477431)
- Previously, performing concurrent operations that referred to the same Gluster node could crash heketi. With this fix, heketi no longer crashes when multiple concurrent operations reference the same gluster node. (BZ#1480501)
- Previously, upgrading heketi disabled creation of file volumes on existing clusters. With this fix, heketi does not disable file volume creation. (BZ#1497946)
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64
- BZ - 1349875 - [ RFE ] heketi-cli should support replacement of a failed node
- BZ - 1432004 - device remove should check there are no pending heals before proceeding with the brick replacement
- BZ - 1435238 - heketi is too sluggish, volume create takes more than 15 minutes
- BZ - 1435530 - [device remove]: heketi volume info doesn't reflect new volfile server when device remove has replaced a device from a new node
- BZ - 1440900 - [RFE] Support Volume expansion in Heketi
- BZ - 1441675 - adding node to cns may fail if one of the existing node is down
- BZ - 1443103 - [Scale Testing] Scaling of gluster volumes beyond 300 failed as the PVC's got stuck in Pending state indefinitely
- BZ - 1446069 - [RFE] Enable block volume creation in Heketi
- BZ - 1461647 - volume delete in heketi fails
- BZ - 1467318 - Getting proxy error while deploying heketi pod(pod restarted)
- BZ - 1468109 - provide a heketi image with auto_create_block_hosting_volume enabled by default
- BZ - 1468428 - All heketi command fails with error 'Failed to get list of pods'
- BZ - 1468882 - heketi allows to create block volumes exceeding the capacity of block hosting volume
- BZ - 1468952 - heketi block API dont respect "ha" count in the request
- BZ - 1468954 - heketi blockvolume delete fails to delete backend gluster block device but returns success
- BZ - 1468994 - heketi-cli blockvolume doesn't have "ha" option
- BZ - 1469360 - Initiators fail to detect gluster-block target
- BZ - 1472642 - Heketi : Node remove should wait if heal is in progress.
- BZ - 1473585 - typo in man heketi-cli
- BZ - 1477040 - ha count from Cli is not treated correctly in heketi block API
- BZ - 1477431 - After Volume Expansion is completed successfully getting write error inside the volume
- BZ - 1479749 - [brick-mux-cli]: Requesting for a clear warning on the recommendation during toggling of brick-mux option
- BZ - 1479777 - Return proper status to caller when volume delete is attempted.
- BZ - 1480123 - [RFE] set volume options via heketi-cli commands.
- BZ - 1480501 - heketi crashed when concurrent operations were performed
- BZ - 1485211 - auto block-hosting volume creation is disabled
- BZ - 1485779 - heketi should set group gluster-block option for block-hosting volume
- BZ - 1486578 - pre-alloc full option should be set by default for all gluster-block devices
- BZ - 1487514 - fstab entry is missing in heketi.json file
- BZ - 1487645 - block-volume creation fails when one of the node is down in a 3 node RHGS cluster
- BZ - 1490980 - Heketi pod stops working when being restarted or deleted
- BZ - 1492533 - dynamic provisioning of block-volume creation fails
- BZ - 1497946 - after upgrade from cns 3.5 to 3.6, volume creation fails