- Issued: 2018-12-17
- Updated: 2018-12-17
RHBA-2018:3827 - Bug Fix Advisory
Synopsis
glusterfs bug fix update
Type/Severity
Bug Fix Advisory
Topic
Updated glusterfs packages that fix several bugs are now available for Red
Hat Gluster Storage 3.4 Update 2 on Red Hat Enterprise Linux 7.
Description
Red Hat Gluster Storage is a software-only, scale-out storage solution that provides flexible and affordable unstructured data storage. It unifies data storage and infrastructure, increases performance, and improves availability and manageability to meet enterprise-level storage challenges.
This advisory fixes the following bugs:
- The database profile for GlusterFS has been updated to include changes that improve the performance for PostgreSQL workloads. (BZ#1644120)
- When the performance.parallel-readdir option is enabled or disabled, the structure of services and subvolumes changes, which also changes the absolute name of each subvolume. When the absolute name changes, a new linkto file is created to store the new name and the old linkto file is unlinked. If the unlink of the old linkto file was attempted more than once, the extra unlink reported a failure instead of recognizing that the file had already been removed, and an I/O error was surfaced to the application. This failure is now ignored, and applications no longer fail with I/O errors after this option is enabled or disabled. (BZ#1634649)
- Previously, if a directory was not synced to or was deleted from the slave cluster, the geo-replication process entered the 'Faulty' state and did not recover until the parent directory was manually created with the same GFID on the slave cluster. With this fix, on ENOENT errors geo-replication automatically creates the parent directory with the same GFID as on the master cluster, and geo-replication proceeds as expected. (BZ#1638069)
- Previously, if a symbolic link on the master pointed to a read-only file system and the file's owner changed, geo-replication entered the 'Faulty' state with a 'Read-only Filesystem' traceback. This occurred because geo-replication tried to sync both ownership and permission metadata, but permission changes are not supported on symbolic links, so the change was applied to the link target on the read-only file system. With this fix, only ownership is synced when syncing the metadata of a symbolic link, not permissions, and geo-replication no longer enters a faulty state. (BZ#1645916)
- Previously, when glusterd received a disconnect event from a glusterfsd process with a large number of brick instances attached, processing the entire disconnect event took a long time. This could leave glusterd in an unresponsive state, causing incoming gluster commands to time out during this window. With this fix, the brick disconnect code path has been reworked to reduce the time complexity of processing the disconnect event, so gluster CLI commands no longer time out. (BZ#1649651)
- After glusterd updates the volume files during an upgrade, it triggers a shutdown of the services. Previously, if this shutdown was triggered and glusterd received a peer connect request it would crash and a core dump was generated. This update fixes this issue by skipping the peer connect request when shutdown is in progress, and the crash no longer occurs. (BZ#1635071)
- When multiple clients accessed the same directory hierarchy after a replace-brick or reset-brick operation, every client accessing the hierarchy attempted name-heal and metadata-heal operations until the hierarchy was healed. Previously, clients paused until these operations completed. This update delegates metadata-heal and name-heal operations to the self-heal daemon, eliminating the client-side unresponsiveness. (BZ#1619357)
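As context for the parallel-readdir fix above (BZ#1634649), the option is toggled per volume with the standard `gluster volume set` command; the volume name below is a placeholder:

```shell
# Enable parallel-readdir on a volume (replace "myvol" with your volume name).
# Toggling this option restructures the client-side graph, which previously
# could surface spurious I/O errors from stale linkto files.
gluster volume set myvol performance.parallel-readdir on

# Disable it again; after this update, neither transition should cause
# applications to fail with I/O errors.
gluster volume set myvol performance.parallel-readdir off
```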
All users of Red Hat Gluster Storage are advised to upgrade to these
updated packages, which resolve these issues.
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to:
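On a typical RHEL 7 system, the packages from this advisory can be pulled in with yum. The `--advisory` option assumes updateinfo metadata is available from your configured repositories (for example, via Red Hat Subscription Management):

```shell
# Apply only the packages from this erratum.
yum update --advisory=RHBA-2018:3827

# Alternatively, update all glusterfs packages to the latest available version.
yum update 'glusterfs*'
```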
Affected Products
- Red Hat Enterprise Linux Server 7 x86_64
- Red Hat Virtualization 4 for RHEL 7 x86_64
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64
Fixes
- BZ - 1402334 - Getting the warning message while erasing the gluster "glusterfs-server" package.
- BZ - 1479446 - Rebalance estimate(ETA) shows wrong details(as intial message of 10min wait reappears) when still in progress
- BZ - 1509888 - redhat-storage-server rpm package - improve shell coding style
- BZ - 1520882 - [GSS]shard files present even after deleting vm from the rhev
- BZ - 1579758 - Converting to replica 2 volume is not throwing warning
- BZ - 1599808 - Change default logrotate settings in RHGS
- BZ - 1603118 - [RHHi] Mount hung and not accessible
- BZ - 1622001 - dht: File rename removes the .glusterfs handle for linkto file
- BZ - 1622308 - Problem with SSL/TLS encryption on Gluster 4.0 & 4.1
- BZ - 1626350 - [Disperse] Don't send final version update if non data fop succeeded
- BZ - 1631166 - Provide indication at the console or in the logs about the progress being made with changelog processing.
- BZ - 1631418 - glusterd crash in regression build
- BZ - 1635100 - Correction for glusterd memory leak because use "gluster volume status volume_name --detail" continuously (cli)
- BZ - 1635136 - Seeing defunct translator and discrepancy in volume info when issued from node which doesn't host bricks in that volume
- BZ - 1636291 - [SNAPSHOT]: with brick multiplexing, snapshot restore will make glusterd send wrong volfile
- BZ - 1636957 - peer went into rejected state, if glusterd restarts while creating a volume
- BZ - 1637459 - volume status doesnt show bricks and shd from one node while it shows from other nodes
- BZ - 1642854 - glusterd: raise default transport.listen-backlog to 1024
- BZ - 1644120 - Update database profile settings for gluster
- BZ - 1644279 - Disperse volume 'df' usage is extremely incorrect after replace-brick.
- BZ - 1647675 - Rebalance could hang under certain circumstances
- BZ - 1648210 - Bumping up of op-version times out on a scaled system with ~1200 volumes
- BZ - 1653073 - default cluster.max-bricks-per-process to 250
- BZ - 1656924 - cluster.max-bricks-per-process 250 not working as expected
CVEs
(none)
References
(none)
Red Hat Enterprise Linux Server 7
SRPM | SHA-256 |
---|---|
glusterfs-3.12.2-32.el7.src.rpm | SHA-256: 8ff217fa34816e96e2b51ccdf2df4f6eecafabc1ca1a3d67367bdab6a574f044 |
x86_64 | |
glusterfs-3.12.2-32.el7.x86_64.rpm | SHA-256: c3f11a632f0f81d8e5f49192ed56b60dafaff96db7de828a8e56b4f09c0f1105 |
glusterfs-api-3.12.2-32.el7.x86_64.rpm | SHA-256: ee8b142372185f48a4939e5dab31e255bb8ab98343104c41e69b8e5d1db77055 |
glusterfs-api-devel-3.12.2-32.el7.x86_64.rpm | SHA-256: e98bad9f36c061a9d99fde28ace068b17902304566236311f05205883e8c8e1a |
glusterfs-cli-3.12.2-32.el7.x86_64.rpm | SHA-256: f7a83ed63cc5c3551bdf89f46ac7e9d5c3f3cd9e53692e0b2b3b2b2fba682910 |
glusterfs-client-xlators-3.12.2-32.el7.x86_64.rpm | SHA-256: 79937df02d1366323d5a53d565dcf38ef17040f16fa9d347a64a5857fd792737 |
glusterfs-debuginfo-3.12.2-32.el7.x86_64.rpm | SHA-256: dafff76291e971ee7dd42892e37e6256ad763721fa4e94607815060937923b32 |
glusterfs-devel-3.12.2-32.el7.x86_64.rpm | SHA-256: 2722433e870de6f6dec8a6073a05216234643fcfcbe5442a48c48834a0a7c9c5 |
glusterfs-fuse-3.12.2-32.el7.x86_64.rpm | SHA-256: 9227f13cb4329b9e43191913a8b85f0edc7467f6e9e20c998e6d0e1fe799babd |
glusterfs-libs-3.12.2-32.el7.x86_64.rpm | SHA-256: 38722843703443ac7cb46f8e21a354e083f4afc2d6d858948d796e0fb9c23a9c |
glusterfs-rdma-3.12.2-32.el7.x86_64.rpm | SHA-256: af8b062dd45cbd82a702419504c8697a70c2ad5b5b7e88a4abd77569d22d885e |
python2-gluster-3.12.2-32.el7.x86_64.rpm | SHA-256: 5caeba1bb734df0c2bc2018585a345ade47b38fedcb0e5dc12741da89bcc0537 |
Red Hat Virtualization 4 for RHEL 7
SRPM | SHA-256 |
---|---|
glusterfs-3.12.2-32.el7.src.rpm | SHA-256: 8ff217fa34816e96e2b51ccdf2df4f6eecafabc1ca1a3d67367bdab6a574f044 |
x86_64 | |
glusterfs-3.12.2-32.el7.x86_64.rpm | SHA-256: c3f11a632f0f81d8e5f49192ed56b60dafaff96db7de828a8e56b4f09c0f1105 |
glusterfs-api-3.12.2-32.el7.x86_64.rpm | SHA-256: ee8b142372185f48a4939e5dab31e255bb8ab98343104c41e69b8e5d1db77055 |
glusterfs-api-devel-3.12.2-32.el7.x86_64.rpm | SHA-256: e98bad9f36c061a9d99fde28ace068b17902304566236311f05205883e8c8e1a |
glusterfs-cli-3.12.2-32.el7.x86_64.rpm | SHA-256: f7a83ed63cc5c3551bdf89f46ac7e9d5c3f3cd9e53692e0b2b3b2b2fba682910 |
glusterfs-client-xlators-3.12.2-32.el7.x86_64.rpm | SHA-256: 79937df02d1366323d5a53d565dcf38ef17040f16fa9d347a64a5857fd792737 |
glusterfs-debuginfo-3.12.2-32.el7.x86_64.rpm | SHA-256: dafff76291e971ee7dd42892e37e6256ad763721fa4e94607815060937923b32 |
glusterfs-devel-3.12.2-32.el7.x86_64.rpm | SHA-256: 2722433e870de6f6dec8a6073a05216234643fcfcbe5442a48c48834a0a7c9c5 |
glusterfs-fuse-3.12.2-32.el7.x86_64.rpm | SHA-256: 9227f13cb4329b9e43191913a8b85f0edc7467f6e9e20c998e6d0e1fe799babd |
glusterfs-libs-3.12.2-32.el7.x86_64.rpm | SHA-256: 38722843703443ac7cb46f8e21a354e083f4afc2d6d858948d796e0fb9c23a9c |
glusterfs-rdma-3.12.2-32.el7.x86_64.rpm | SHA-256: af8b062dd45cbd82a702419504c8697a70c2ad5b5b7e88a4abd77569d22d885e |
Red Hat Gluster Storage Server for On-premise 3 for RHEL 7
SRPM | SHA-256 |
---|---|
glusterfs-3.12.2-32.el7rhgs.src.rpm | SHA-256: dfe0b497da20cff43882198b54322cf7405ed916f3318ed4b5432dff6c9aeb0d |
redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm | SHA-256: a2b4cefd3803072dfa2625951aa3bec154283672798c665ec3b6c787adda2521 |
x86_64 | |
glusterfs-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: 28c1b327c3aaa6271da472d3179ea74197d626bac61697065c23a7b04923fd6f |
glusterfs-api-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: 52e4e3a8f42a7d61d3c7f74b73a0de0985d182c71b134ddb4662e7428c1626f0 |
glusterfs-api-devel-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: f2decd67a087a409a5bbbab61f0342b0d77591c005cb5aa204ecf29844045e25 |
glusterfs-cli-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: a88f2696cb898dcc465e9140a197a9f54484a7ea590a04c36630a7267864aaef |
glusterfs-client-xlators-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: a5eb62f450e0b6454197cf64a01bb6447312f56b7261b46ebee0ef66ee9ec1d9 |
glusterfs-debuginfo-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: 08540b94abbc8557d54aade658a73350aae193c3d851c32ff38cf906fe6f0b58 |
glusterfs-devel-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: 08f9a2f40a30d0f5a170ad9466dc1fe0c9a92458ce593c95c118cf30240f059c |
glusterfs-events-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: f7abbeb5202685884b06009f57d2ead42a68fc3609e35171ed4fc7b88cad51d1 |
glusterfs-fuse-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: a03ef639843a98a8fdd54a3a61c151dc0cc64c77d2d23dbd4b7fcad2e42fc4ed |
glusterfs-ganesha-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: 40f00033487c15d175670533c823a6685dfeb6194b858948dd7a77fed39f7528 |
glusterfs-geo-replication-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: 8e253a3ed77b01fde3de7b76b152c423a0e38fe2598532ca725c46a53005b498 |
glusterfs-libs-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: dee954ff94f89a61a67fbf55613768bc34a78188b93c59750dcaa9e0d7260e8b |
glusterfs-rdma-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: c42d2b91fa1b4cc4e33e19ff9a1f6ef9c970bb73a155d8d2c407883bcd194b9d |
glusterfs-resource-agents-3.12.2-32.el7rhgs.noarch.rpm | SHA-256: 18394a7044bc946f764d59fbd739a9ae5a54c2f27b08ab65a9703d97433da4a8 |
glusterfs-server-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: 2156d4678f702d26e85c702f9f1ed4159de7b4e199420b53d241e176ce1f232e |
python2-gluster-3.12.2-32.el7rhgs.x86_64.rpm | SHA-256: a3ed3e00b35fef17fd63d42a4ed61b630db76c80e89796b2ae461ca5ac46552b |
redhat-storage-server-3.4.2.0-1.el7rhgs.noarch.rpm | SHA-256: 42e66b9b212d46b7a2c1bc712020d0c9077e0dd2322027deab4328a4d11e5897 |
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.