Red Hat Product Errata RHBA-2017:2774 - Bug Fix Advisory
Issued: 2017-09-21
Updated: 2017-09-21


Synopsis

glusterfs bug fix and enhancement update

Type/Severity

Bug Fix Advisory


Topic

Updated glusterfs packages that fix several bugs and add various enhancements are now available.

Description

Red Hat Gluster Storage is a software-only, scale-out storage solution
that provides flexible and affordable unstructured data storage.
It unifies data storage and infrastructure, increases performance,
and improves availability and manageability to meet enterprise-level
storage challenges.

This update fixes a number of issues and adds a number of enhancements.
Space precludes documenting all of these changes in this advisory.
Users are directed to the Red Hat Gluster Storage 3.3 Release Notes
for information about the most significant changes:

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/3.3_release_notes/

All users of glusterfs are advised to upgrade to these updated packages,
which provide numerous bug fixes and enhancements.

Solution

Before applying this update, make sure all previously released errata
relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258
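
As a rough illustration of the yum-based procedure described in the
article above, the sketch below updates the glusterfs packages shipped
in this advisory. It is a minimal sketch only: it assumes a RHEL 7 or
RHGS 3 host that is registered and subscribed to the appropriate
repositories, and it does not replace the linked instructions.

    #!/usr/bin/env python
    # Minimal sketch (assumption): apply this advisory's glusterfs packages
    # with yum on a registered RHEL 7 / RHGS 3 host. The authoritative
    # procedure is documented at https://access.redhat.com/articles/11258.
    import subprocess

    def apply_glusterfs_update():
        # Refresh repository metadata, then update only the glusterfs
        # packages; yum resolves dependent subpackages (glusterfs-libs,
        # glusterfs-fuse, and so on) itself.
        subprocess.run(["yum", "clean", "expire-cache"], check=True)
        subprocess.run(["yum", "-y", "update", "glusterfs*"], check=True)

    if __name__ == "__main__":
        apply_glusterfs_update()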

Affected Products

  • Red Hat Enterprise Linux Server 7 x86_64
  • Red Hat Virtualization 4 for RHEL 7 x86_64
  • Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64

Fixes

  • BZ - 843838 - [RFE] Enhancing geo-replication status to show details without inclusion of master/slave volume in the input cli
  • BZ - 1115951 - [USS]: one of the snapshot log file is filled with numerous entries of "failed to do lookup and get the handle on the snapshot (null)" message
  • BZ - 1129468 - [RFE] Add a count of snapshots associated with a volume to the output of the vol info command
  • BZ - 1165648 - [USS]: If glusterd goes down on the originator node while snapshots are activated, after glusterd comes back up, accessing .snaps do not list any snapshots even if they are present
  • BZ - 1167252 - [rfe] glusterfs snapshot cli commands should provide xml output.
  • BZ - 1247056 - [SELinux] [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
  • BZ - 1260779 - Value of `replica.split-brain-status' attribute of a directory in metadata split-brain in a dist-rep volume reads that it is not in split-brain
  • BZ - 1285170 - glusterd: cli is showing command success for rebalance commands(command which uses op_sm framework) even though staging is failed in follower node.
  • BZ - 1291617 - Multiple crashes observed during "qr_lookup_cbk" and "qr_readv" on slave side of geo-replication setup
  • BZ - 1291974 - [GlusterD]: Bricks are in offline state after node reboot
  • BZ - 1297743 - [perf-xlators/write-behind] write-behind-window-size could be set greater than its allowed MAX value 1073741824
  • BZ - 1298258 - [RFE] Bit-rot lgetxattrs calls on volume with bit-rot disabled
  • BZ - 1309209 - clone creation with older names in a system fails
  • BZ - 1315583 - [tiering]: gluster v reset of watermark levels can allow low watermark level to have a higher value than hi watermark level
  • BZ - 1315781 - AFR returns the node uuid of the same node for every file in the replica
  • BZ - 1323928 - when server-quorum is enabled, volume get returns 0 value for server-quorum-ratio
  • BZ - 1326183 - warning messages seen in glusterd logs while setting the volume option
  • BZ - 1327045 - [geo-rep]: rsync should not try to sync internal xattrs
  • BZ - 1333658 - Fresh lookup for directories must not take place when the redundant #bricks are got down in a disperse volume
  • BZ - 1335090 - Shared volume doesn't get mounted on few nodes after rebooting all nodes in cluster.
  • BZ - 1351185 - [GSS] - RFE: Add info on op-version for clients in vol status output in order to ease client and server version compatibility
  • BZ - 1359613 - [RFE] Geo-replication Logging Improvements
  • BZ - 1360317 - [GSS] glusterfs doesn't respect cluster.min-free-disk on remove-brick operation
  • BZ - 1370027 - RFE: Support to update NFS-Ganesha export options dynamically
  • BZ - 1372283 - [GSS] NFS Sub-directory mount not working on solaris10 client
  • BZ - 1378085 - Unable to take Statedump for gfapi applications
  • BZ - 1380598 - RFE: An administrator friendly way to determine rebalance completion time
  • BZ - 1381142 - The rebal-throttle setting does not work as expected
  • BZ - 1381158 - CLI option "--timeout" is accepting non numeric and negative values.
  • BZ - 1381825 - glusterd restart is starting the offline shd daemon on other node in the cluster
  • BZ - 1383979 - Getting error messages in glusterd.log when peer detach is done
  • BZ - 1385589 - [geo-rep]: Worker crashes seen while renaming directories in loop
  • BZ - 1387328 - Log message shows error code as success even when rpc fails to connect
  • BZ - 1389678 - Getting continuous info messages when the mounted volume is stopped in SSL setup
  • BZ - 1395989 - nfs-ganesha volume export file remains stale in shared_storage_volume when volume is deleted
  • BZ - 1396010 - [Disperse] healing should not start if only data bricks are UP
  • BZ - 1400816 - [GANESHA] Symlinks from /etc/ganesha/ganesha.conf to shared_storage are created on the non-ganesha nodes in 8 node gluster having 4 node ganesha cluster
  • BZ - 1405910 - Need a way to reduce the logging of messages "Peer CN" and "SSL verification suceeded messages" in glusterd.log file
  • BZ - 1408361 - Warning messages throwing when EC volume offline brick comes up are difficult to understand for end user.
  • BZ - 1409474 - [Remove-brick] Hardlink migration fails with "lookup failed (No such file or directory)" error messages in rebalance logs
  • BZ - 1411344 - [GNFS] GNFS crashed while taking lock on a file from 2 different clients having same volume mounted from 2 different servers
  • BZ - 1411352 - rename of the same file from multiple clients with caching enabled may result in duplicate files
  • BZ - 1412930 - [SSL] - when a node or glusternw is down all the clients logs are flooded with SSL connect error and client setup failed messages
  • BZ - 1414410 - [NFS-Ganesha] Ganesha service crashed while doing refresh config on volume and when IOs are running.
  • BZ - 1414750 - [Geo-rep] Slave mount log file is cluttered by logs of multiple active mounts
  • BZ - 1414758 - quota: limit-usage command failed with error " Failed to start aux mount"
  • BZ - 1415038 - [Ganesha + EC] : Input/Output Error while creating LOTS of smallfiles
  • BZ - 1415178 - systemic testing: seeing lot of ping time outs which would lead to splitbrains
  • BZ - 1416024 - Unable to take snapshot on a geo-replicated volume, even after stopping the session
  • BZ - 1417062 - [Stress] : Input/Output Error on EC over gNFS while creating LOTS of smallfiles
  • BZ - 1417815 - [RFE] Support multiple bricks in one process (multiplexing)
  • BZ - 1418227 - Quota: After deleting directory from mount point on which quota was configured, quota list command output is blank
  • BZ - 1419816 - [Nfs-ganesha] Refresh config fails when ganesha cluster is in failover mode.
  • BZ - 1420796 - limited throughput with disperse volume over small number of bricks
  • BZ - 1421639 - [GSS] Resources limits for core is set for every user which breaks further changes
  • BZ - 1422153 - [NFS-Ganesha]Failed to unexport volume when ganesha cluster is in failover state.
  • BZ - 1424680 - Restarting FUSE causes previous FUSE mounts to be in a bad state.
  • BZ - 1425684 - Worker crashes with EINVAL errors
  • BZ - 1425690 - Worker restarts on log-rsync-performance config update
  • BZ - 1425695 - [Geo-rep] If for some reason MKDIR failed to sync, it should not proceed further.
  • BZ - 1425697 - dht_setxattr returns EINVAL when a file is deleted during the FOP
  • BZ - 1426034 - Add logs to identify whether disconnects are voluntary or due to network problems
  • BZ - 1426326 - [Ganesha] : Add comment to Ganesha HA config file ,about cluster name's length limitation
  • BZ - 1426502 - [RFE] redhat-storage-server build for RHGS 3.3.0
  • BZ - 1426523 - [GSS] Stale export entries in ganesha.conf after executing "gluster nfs-ganesha disable"
  • BZ - 1426950 - [RFE] capture portmap details in glusterd's statedump
  • BZ - 1426952 - [geo-rep]: extended attributes are not synced if the entry and extended attributes are done within changelog rollover/or entry sync
  • BZ - 1427096 - [RFE] [Samba] Better readdir performance with parallel readdir feature
  • BZ - 1427099 - [RFE] [Samba] Implement Negative lookup cache feature to improve create performance
  • BZ - 1427159 - possible repeatedly recursive healing of same file with background heal not happening when IO is going on
  • BZ - 1427452 - nfs-ganesha: Incorrect error message returned when disable fails
  • BZ - 1427870 - [geo-rep]: Worker crashes with [Errno 16] Device or resource busy: '.gfid/00000000-0000-0000-0000-000000000001/dir.166 while renaming directories
  • BZ - 1427958 - [GSS] Error 0-socket.management: socket_poller XX.XX.XX.XX:YYY failed (Input/output error) during any volume operation
  • BZ - 1428936 - [GSS]Remove-brick operation is slow in a distribute-replicate volume in RHGS 3.1.3
  • BZ - 1433276 - glusterd crashes when peering an IP where the address is more than acceptable range (>255) OR with random hostnames
  • BZ - 1433751 - [RFE] 'gluster volume get' should implement the way to retrieve volume options using the volume name 'all'
  • BZ - 1434653 - Application VMs with their disk images on sharded-replica 3 volume are unable to boot after performing rebalance
  • BZ - 1435357 - [GSS]RHGS 3.1.3 glusterfs client crash on io-cache.so(__ioc_page_wakeup+0x44)
  • BZ - 1435587 - [GSS]geo-replication faulty
  • BZ - 1435656 - Disperse: Provide description of disperse.eager-lock option.
  • BZ - 1436156 - [RHEL7] Updated glusterfs build for RHGS 3.3.0
  • BZ - 1437332 - auth failure after upgrade to GlusterFS 3.10
  • BZ - 1437773 - Undo pending xattrs only on the up bricks
  • BZ - 1437782 - Unable to mount with latest gluster builds 3.8.4-19
  • BZ - 1437940 - Brick Multiplexing: Core dumped when brick multiplex is enabled
  • BZ - 1437957 - Brick Multiplexing: Glusterd crashed when stopping volumes
  • BZ - 1437960 - [Parallel-readdirp] Rename of files results in duplicate files in mount point as we enable parallel readdirp for the gluster volume
  • BZ - 1438051 - Brick Multiplexing:Volume status still shows the PID even after killing the process
  • BZ - 1438052 - Glusterd crashes when restarted with many volumes
  • BZ - 1438245 - [Parallel Readdir] : No bound-checks/CLI validation for parallel readdir tunables
  • BZ - 1438378 - [Snapshot]Create snapshot fails with error saying snapshot command failed
  • BZ - 1438468 - [Parallel Readdir]: Hard link creation creates two target links on mount point (EC/SMB)
  • BZ - 1438706 - Sharding: Fix a performance bug
  • BZ - 1438712 - [Parallel Readdir] Client Logs is filled with error messages saying readdir-filter-directories failed
  • BZ - 1438820 - [Parallel Readdir] : Simultaneous rename of the same file to a different name from different clients succeeds,and all "new" renamed files are visible on the mount.
  • BZ - 1439039 - [Parallel Readdir] : Bonnie++ fails,complains about getting lesser number of files than expected.
  • BZ - 1439250 - [Parallel Readdir] : Reads fail during dbench
  • BZ - 1439708 - [geo-rep]: Geo-replication goes to faulty after upgrade from 3.2.0 to 3.3.0
  • BZ - 1440638 - [RFE]: Testing effort tracker for new EC variants 8+2 and 16+4
  • BZ - 1440699 - Upgrade of RHGS 3.2 to RHGS 3.3 fails with dependency error of libldb when rhgs-samba channel is not added
  • BZ - 1441055 - pacemaker service is disabled after creating nfs-ganesha cluster.
  • BZ - 1441280 - [snapshot cifs]ls on .snaps directory is throwing input/output error over cifs mount
  • BZ - 1441783 - [GANESHA] Adding a node to existing cluster failed to start pacemaker service on new node
  • BZ - 1441932 - Gluster operations fails with another transaction in progress as volume delete acquires lock and won't release
  • BZ - 1441942 - [Eventing]: Unrelated error message displayed when path specified during a 'webhook-test/add' is missing a schema
  • BZ - 1441946 - Brick Multiplexing: volume status showing "Another transaction is in progress"
  • BZ - 1441951 - [Parallel Readdir]:Seeing linkto files on FUSE mount after rebalance
  • BZ - 1441992 - ls hang seen on an existing mount point (3.2 client) when the server is upgraded and parallel readdir is enabled
  • BZ - 1442026 - [Parallel Readdir] When parallel readdir is enabled, linked to file resolution fails
  • BZ - 1442787 - Brick Multiplexing: During Remove brick when glusterd of a node is stopped, the brick process gets disconnected from glusterd purview and hence losing multiplexing feature
  • BZ - 1442943 - dht/rebalance: Increase maximum read block size from 128 KB to 1 MB
  • BZ - 1443051 - Brick Multiplexing: Unable to activate Snapshot
  • BZ - 1443123 - [BrickMultiplex] gluster command not responding and .snaps directory is not visible after executing snapshot related command
  • BZ - 1443843 - Brick Multiplexing :- resetting a brick bring down other bricks with same PID
  • BZ - 1443884 - iscsi and iscsid services should not be disabled
  • BZ - 1443939 - Brick Multiplexing :- .trashcan not able to heal after replace brick
  • BZ - 1443941 - Brick Multiplexing: seeing Input/Output Error for .trashcan
  • BZ - 1443950 - [Brick Multiplexing] "cluster.brick-multiplex" volume set option should only take boolean value as input
  • BZ - 1443961 - [Brick Multiplexing]: Glusterd crashed when volume force started after disabling brick multiplex
  • BZ - 1443972 - [Brick Multiplexing] : Bricks for multiple volumes going down after glusterd restart and not coming back up after volume start force
  • BZ - 1443980 - [New] - Replacing an arbiter brick while I/O happens causes vm pause
  • BZ - 1443990 - [GANESHA] Volume start and stop having ganesha enable on it,turns off cache-invalidation on volume
  • BZ - 1443991 - [Brick Multiplexing] Brick process on a node didn't come up after glusterd stop/start
  • BZ - 1444086 - Brick Multiplexing:Different brick processes pointing to same socket, process file and volfile-id must not lead to IO loss when one of the volume is down
  • BZ - 1444515 - [GNFS+EC] Unable to release the lock when the other client tries to acquire the lock on the same file
  • BZ - 1444790 - [Brick Multiplexing] : cluster.brick-multiplex has no description.
  • BZ - 1444861 - Brick Multiplexing: bricks of volume going offline possibly because the brick PID is associated with another volume which was brought down
  • BZ - 1444926 - Brick Multiplexing: creating a volume with same base name and base brick after it was deleted brings down all the bricks associated with the same brick process
  • BZ - 1445145 - 186.pem certificate still shows incorrect version and tag as 3.2.0 on RHEL6 and RHEL7
  • BZ - 1445246 - [Parallel Readdir] : Mounts fail when performance.parallel-readdir is set to "off"
  • BZ - 1445251 - [Parallel Readdir] : Unable to re-enable parallel readdir after disabling it if rda cache limit is > 1GB
  • BZ - 1445570 - Provide a correct way to save the statedump generated by gfapi application
  • BZ - 1446107 - [Brick MUX] : Rebalance fails.
  • BZ - 1446165 - Seeing error "Failed to get the total number of files. Unable to estimate time to complete rebalance" in rebalance logs
  • BZ - 1446645 - [GANESHA] Glusterd crashed while deleting volume on which ganesha was enabled
  • BZ - 1447559 - Seeing Input/Output error in rebalance logs during a rebalance on an ec volume(log level=Debug)
  • BZ - 1447920 - [Brick MUX]: Tier daemons in failed state on a setup where brick-multiplexing was on-and-put-off-later
  • BZ - 1447929 - [Tiering]: High and low watermark values when set to the same level, is allowed
  • BZ - 1447959 - Mismatch in checksum of the image file after copying to a new image file
  • BZ - 1448386 - [geo-rep]: Worker crashed with TypeError: expected string or buffer
  • BZ - 1448434 - [GANESHA] posix compliance test failures
  • BZ - 1448833 - [Brick Multiplexing] heal info shows the status of the bricks as "Transport endpoint is not connected" though bricks are up
  • BZ - 1449226 - [gluster-block]:Need a volume group profile option for gluster-block volume to add necessary options to be added.
  • BZ - 1449593 - When either killing or restarting a brick with performance.stat-prefetch on, stat sometimes returns a bad st_size value.
  • BZ - 1449684 - rchecksum is not shown in profile info
  • BZ - 1450004 - [nl-cache] Remove-brick rebalance on a node failed; log says "Readdirp failed. Aborting data migration for directory: / [File descriptor in bad state]"
  • BZ - 1450080 - [Negative Lookup]: negative lookup features doesn't seem to work on restart of volume
  • BZ - 1450330 - [nl-cache]: negative lookup cache limit not honoured
  • BZ - 1450336 - [NegativeLookup] Samba crash when continuous mount and unmount is done along with negative look up
  • BZ - 1450341 - [Negative Lookup] Remove the max limit for nl-cache-limit and nl-cache-timeout
  • BZ - 1450722 - [RFE] glusterfind: add --end-time and --field-separator options
  • BZ - 1450806 - Brick Multiplexing: Brick process shows as online in vol status even when brick is offline
  • BZ - 1450807 - Brick Multiplexing:heal info shows root as heal pending for all associated volumes, after bringing down brick of one volume
  • BZ - 1450813 - Brick Multiplexing: heal info shows brick as online even when it is brought down
  • BZ - 1450830 - [Perf] 35% drop in small file creates on smbv3 on *2
  • BZ - 1450838 - [Perf] 34% drop in small file reads on smbv3 on *2
  • BZ - 1450841 - [Perf] 17% drop in small file "delete-rename" on smbv3 on *2
  • BZ - 1450845 - [Perf] 26% & 15% drop in rmdir & mkdir on smbv3 on *2
  • BZ - 1450848 - [Perf] 16% drop in renames on smbv3 on *2
  • BZ - 1450889 - Brick Multiplexing: On reboot of a node Brick multiplexing feature lost on that node as multiple brick processes get spawned
  • BZ - 1450898 - [Perf] Huge drop in metadata workload on smbv3 on *2
  • BZ - 1450904 - [geo-rep + nl]: Multiple crashes observed on slave with "nlc_lookup_cbk"
  • BZ - 1451086 - crash in dht_rmdir_do
  • BZ - 1451224 - [GNFS] NFS Sub directory is getting mounted on solaris 10 even when the permission is restricted in nfs.export-dir volume option
  • BZ - 1451280 - [Bitrot]: Brick process crash observed while trying to recover a bad file in disperse volume
  • BZ - 1451598 - Brick Multiplexing: Deleting brick directories of the base volume must gracefully detach from glusterfsd without impacting other volumes IO(currently seeing transport end point error)
  • BZ - 1451602 - Brick Multiplexing:Even clean Deleting of the brick directories of base volume is resulting in posix health check errors(just as we see in ungraceful delete methods)
  • BZ - 1451756 - client fails to connect to the brick due to an incorrect port reported back by glusterd
  • BZ - 1452083 - [Ganesha] : Stale linkto files after unsuccessful hardlinks
  • BZ - 1452205 - glusterd on a node crashed after running volume profile command
  • BZ - 1452513 - [Stress] : Client process crashed during finds/rm from a single client.
  • BZ - 1452528 - rebalance: fix space check for migration for rebalance
  • BZ - 1453049 - [DHt] : segfault in dht_selfheal_dir_setattr while running regressions
  • BZ - 1453145 - Brick Multiplexing:dmesg shows request_sock_TCP: Possible SYN flooding on port 49152
  • BZ - 1453160 - [Bitrot]: Multiple bricks crash seen on a cifs setup with nl-cache and II-readdirp enabled volume
  • BZ - 1454313 - gluster-block is not working as expected when shard is enabled
  • BZ - 1454416 - [Stress] : Client process crashed while removing files.
  • BZ - 1454558 - qemu_gluster_co_get_block_status gets SIGABRT when doing blockcommit continually
  • BZ - 1454596 - [Bitrot]: Inconsistency seen with 'scrub ondemand' - fails to trigger scrub
  • BZ - 1454602 - Rebalance estimate time sometimes shows negative values
  • BZ - 1454689 - "split-brain observed [Input/output error]" error messages in samba logs during parallel rm -rf
  • BZ - 1455022 - [GSS] Use of force with volume start, creates brick directory even it is not present
  • BZ - 1455241 - [Scale] : Rebalance start force is skipping files.
  • BZ - 1456402 - [io-threads] : io-threads gets loaded for newly created replica volumes on RHGS 3.3.0.
  • BZ - 1456831 - [RHEL7] product certificate update for RHGS 3.3.0 for RHEL 7.4
  • BZ - 1457179 - [Ganesha] : Grace period is not being adhered to on RHEL 7.4; Clients continue running IO even during grace.
  • BZ - 1457183 - [Ganesha] : Ganesha crashes while cluster enters failover/failback mode and during basic IO with the same BT.
  • BZ - 1457713 - Permission denied errors when appending files after readdir
  • BZ - 1457731 - [Scale] : Rebalance ETA (towards the end) may be inaccurate,even on a moderately large data set.
  • BZ - 1457936 - possible memory leak in glusterfsd with multiplexing
  • BZ - 1458569 - Regression test for add-brick failing with brick multiplexing enabled
  • BZ - 1458585 - add all as volume option in gluster volume get usage
  • BZ - 1459400 - brick process crashes while running bug-1432542-mpx-restart-crash.t in a loop
  • BZ - 1459756 - [NegativeLookUp Cache] "nl-cache.c:201:nlc_lookup_cbk" Error messages are seen in client logs after any IO is performed
  • BZ - 1459790 - [Snapshot].snaps directory is not visible in windows client
  • BZ - 1459900 - Brick Multiplexing:Not cleaning up stale socket file is resulting in spamming glusterd logs with warnings of "got disconnect from stale rpc"
  • BZ - 1459972 - posix-acl: Whitelist virtual ACL xattrs
  • BZ - 1460098 - [Negative Lookup Cache]Need a single group set command for enabling all required nl cache options
  • BZ - 1460231 - snapshot: snapshot status command shows brick running as yes though snapshot is deactivated
  • BZ - 1460936 - [Scale] : Rebalance ETA shows the initial estimate to be ~140 days,finishes within 18 hours though.
  • BZ - 1461098 - [Ganesha] Ganesha service failed to start on new node added in existing ganesha cluster
  • BZ - 1461543 - [Ganesha] : Ganesha occupies increased memory (RES size) even when all the files are deleted from the mount
  • BZ - 1461649 - glusterd crashes when statedump is taken
  • BZ - 1462066 - Dict_t leak in dht_migration_complete_check_task and dht_rebalance_inprogress_task
  • BZ - 1462687 - [Geo-rep]: Worker stuck in loop on trying to sync a directory
  • BZ - 1462693 - with AFR now making both nodes to return UUID for a file will result in georep consuming more resources
  • BZ - 1462753 - glusterfind: syntax error due to uninitialized variable 'end'
  • BZ - 1462773 - [Ganesha]Bricks got crashed while running posix compliance test suit on V4 mount
  • BZ - 1463104 - lk fop succeeds even when lock is not acquired on at least quorum number of bricks
  • BZ - 1463108 - Regression: Heal info takes longer time when a brick is down
  • BZ - 1463221 - cns-brick-multiplexing: brick process fails to restart after gluster pod failure
  • BZ - 1463907 - Application VMs, with the disk images on replica 3 volume, paused post rebalance
  • BZ - 1464336 - self-heal daemon CPU consumption not reducing when IOs are going on and all redundant bricks are brought down one after another
  • BZ - 1464453 - Fuse mount crashed with continuous dd on a file and reading the file in parallel
  • BZ - 1464727 - glusterd: crash on parsing /proc/<pid>/cmdline after node restart after ungraceful shutdown
  • BZ - 1465011 - glusterfind: DELETE path needs to be unquoted before further processing
  • BZ - 1465638 - RHGS 3.1.3 to 3.2 Upgrade - SegFault
  • BZ - 1466144 - [GANESHA] Ganesha setup creation fails due to selinux blocking some services required for setup creation
  • BZ - 1466321 - dht_rename_lock_cbk crashes in upstream regression test
  • BZ - 1466608 - multiple brick processes seen on gluster(fs)d restart in brick multiplexing
  • BZ - 1467621 - build: make gf_attach available in glusterfs-server
  • BZ - 1467807 - gluster volume status --xml fails when there are 100 volumes
  • BZ - 1468186 - [Geo-rep]: entry failed to sync to slave with ENOENT error
  • BZ - 1468484 - Enable stat-prefetch in virt profile
  • BZ - 1468514 - Brick Mux Setup: brick processes(glusterfsd) crash after a restart of volume which was preceded with some actions
  • BZ - 1468950 - [RFE] Have a global option to set per node limit to the number of multiplexed brick processes
  • BZ - 1469041 - Rebalance hangs on remove-brick if the target volume changes
  • BZ - 1469971 - cluster/dht: Fix hardlink migration failures
  • BZ - 1471918 - [distribute] crashes seen upon rmdirs
  • BZ - 1472129 - Brick Multiplexing: Brick process crashed at changetimerecorder(ctr) translator when restarting volumes
  • BZ - 1472273 - noticing WriteTimeoutException with Cassandra write query
  • BZ - 1472289 - No clear method to multiplex all bricks to one process(glusterfsd) with cluster.max-bricks-per-process option
  • BZ - 1472604 - [geo-rep]: RMDIR at master causing worker crash
  • BZ - 1472764 - [EC]: md5sum mismatches every time for a file from the fuse client on EC volume
  • BZ - 1472773 - [GNFS] GNFS got crashed while mounting volume on solaris client
  • BZ - 1473229 - [Scale] : Client logs flooded with "inode context is NULL" error messages
  • BZ - 1473259 - [Scale] : Rebalance Logs are bulky.
  • BZ - 1473327 - Brick Multiplexing: Seeing stale brick process when all gluster processes are stopped and then started with glusterd
  • BZ - 1474284 - dht remove-brick status does not indicate failures for files not migrated because of a lack of space
  • BZ - 1474380 - [geo-rep]: few of the self healed hardlinks on master did not sync to slave
  • BZ - 1474812 - [Remove-brick] Few files are getting migrated even though the bricks crossed cluster.min-free-disk value
  • BZ - 1475136 - [Perf] : Large file sequential reads are off target by ~38% on FUSE/Ganesha
  • BZ - 1475176 - [Perf] Random Reads regressed by 42% on FUSE
  • BZ - 1476556 - [geo-rep]: Worker crash during rmdir with "NameError: global name 'lf' is not defined"
  • BZ - 1476867 - packaging: /var/lib/glusterd/options should be %config(noreplace)
  • BZ - 1476871 - [NFS] nfs process crashed in "nfs3_getattr"
  • BZ - 1477024 - when gluster pod is restarted, bricks from the restarted pod fails to connect to fuse, self-heal etc
  • BZ - 1477668 - Cleanup retired mem-pool allocations
  • BZ - 1478136 - [GNFS] gnfs crashed at nfs3_lookup while subdir mount on solaris client
  • BZ - 1478716 - [Scale] : I/O errors on multiple gNFS mounts with "Stale file handle" during rebalance of an erasure coded volume.
  • BZ - 1479710 - [brick-mux-cli]: Requesting for a clear warning on the recommendation during toggling of brick-mux option
  • BZ - 1480423 - Gluster Bricks are not coming up after pod restart when bmux is ON
  • BZ - 1481392 - libgfapi: memory leak in glfs_h_acl_get
  • BZ - 1483956 - [rpc]: EPOLLERR - disconnecting now messages every 3 secs after completing rebalance
  • BZ - 1486115 - gluster-block profile needs to have strict-o-direct
  • BZ - 1488152 - gluster-blockd process crashed and core generated

CVEs

(none)

References

(none)

Note: More recent versions of these packages may be available.

Red Hat Enterprise Linux Server 7

SRPM
glusterfs-3.8.4-44.el7.src.rpm SHA-256: ac9d971c82684060acc1ebaaf5f9efbf1dc5fe86442643fcb87db8de1236ac9a
x86_64
glusterfs-3.8.4-44.el7.x86_64.rpm SHA-256: 43ce9cfdd8e0d71ff4fa3f59f752c2cdbb52f251d7bb0364e041ea32e588ad8f
glusterfs-api-3.8.4-44.el7.x86_64.rpm SHA-256: 5d9cfd2eb8b07137ddcc131b2d82746790e668a7eae85e8edfb0b1e4e0977118
glusterfs-api-devel-3.8.4-44.el7.x86_64.rpm SHA-256: 5b7bb25d4aa3edb8b3a3c55fb7e201232018f854c859253437c98cb0bce16a0d
glusterfs-cli-3.8.4-44.el7.x86_64.rpm SHA-256: 24771ac6e647e81886f5d6e43ac3aa206135ef47cb9880e0163914d3dba32bc8
glusterfs-client-xlators-3.8.4-44.el7.x86_64.rpm SHA-256: 216fa669ce920d8ab7edba5c3639c05f8561c2e0aeb1f0484edc1f9445611f03
glusterfs-debuginfo-3.8.4-44.el7.x86_64.rpm SHA-256: 22b1b036847384ffea131e51406a1ecf9d748c682379a04379d211e78d0b2755
glusterfs-devel-3.8.4-44.el7.x86_64.rpm SHA-256: f84bfbeb6be692f1e3116c33e2012bfdac2b156fe8814e0c556f25bcde4aac4a
glusterfs-fuse-3.8.4-44.el7.x86_64.rpm SHA-256: bbad395c63ef8baf52083009b89fa50432ef6cfa0bb7b6c6ee6be4d00b8e5181
glusterfs-libs-3.8.4-44.el7.x86_64.rpm SHA-256: 6b7dee529de634bebe1027e55f141224371fb55396461814f1af04ccdb611497
glusterfs-rdma-3.8.4-44.el7.x86_64.rpm SHA-256: 715493f4eb69c6d56f78a51735394a5dd2891da6800c3e4c5b44c654348f6f70
python-gluster-3.8.4-44.el7.noarch.rpm SHA-256: 2df09e01c12dd5775020d2fd0c615d637f500dabc617cbdd2761f8d3c669bb0a

Red Hat Virtualization 4 for RHEL 7

SRPM
glusterfs-3.8.4-44.el7.src.rpm SHA-256: ac9d971c82684060acc1ebaaf5f9efbf1dc5fe86442643fcb87db8de1236ac9a
x86_64
glusterfs-3.8.4-44.el7.x86_64.rpm SHA-256: 43ce9cfdd8e0d71ff4fa3f59f752c2cdbb52f251d7bb0364e041ea32e588ad8f
glusterfs-api-3.8.4-44.el7.x86_64.rpm SHA-256: 5d9cfd2eb8b07137ddcc131b2d82746790e668a7eae85e8edfb0b1e4e0977118
glusterfs-api-devel-3.8.4-44.el7.x86_64.rpm SHA-256: 5b7bb25d4aa3edb8b3a3c55fb7e201232018f854c859253437c98cb0bce16a0d
glusterfs-cli-3.8.4-44.el7.x86_64.rpm SHA-256: 24771ac6e647e81886f5d6e43ac3aa206135ef47cb9880e0163914d3dba32bc8
glusterfs-client-xlators-3.8.4-44.el7.x86_64.rpm SHA-256: 216fa669ce920d8ab7edba5c3639c05f8561c2e0aeb1f0484edc1f9445611f03
glusterfs-debuginfo-3.8.4-44.el7.x86_64.rpm SHA-256: 22b1b036847384ffea131e51406a1ecf9d748c682379a04379d211e78d0b2755
glusterfs-devel-3.8.4-44.el7.x86_64.rpm SHA-256: f84bfbeb6be692f1e3116c33e2012bfdac2b156fe8814e0c556f25bcde4aac4a
glusterfs-fuse-3.8.4-44.el7.x86_64.rpm SHA-256: bbad395c63ef8baf52083009b89fa50432ef6cfa0bb7b6c6ee6be4d00b8e5181
glusterfs-libs-3.8.4-44.el7.x86_64.rpm SHA-256: 6b7dee529de634bebe1027e55f141224371fb55396461814f1af04ccdb611497
glusterfs-rdma-3.8.4-44.el7.x86_64.rpm SHA-256: 715493f4eb69c6d56f78a51735394a5dd2891da6800c3e4c5b44c654348f6f70
python-gluster-3.8.4-44.el7.noarch.rpm SHA-256: 2df09e01c12dd5775020d2fd0c615d637f500dabc617cbdd2761f8d3c669bb0a

Red Hat Gluster Storage Server for On-premise 3 for RHEL 7

SRPM
glusterfs-3.8.4-44.el7rhgs.src.rpm SHA-256: 141a6f431a53b10c0113c2722b74db6ba2303308ceb1e2e11cb898b205c641f3
redhat-release-server-7.4-25.el7rhgs.src.rpm SHA-256: a79ef0e160bead215d679aab23b627cd5dc570e0435fcbb00cdffabb4bdbaf9e
redhat-storage-server-3.3.0.2-1.el7rhgs.src.rpm SHA-256: f31be029f6782776ccb78593303aed6cb68e6d5753c10d0eaf83e5d3cec631a3
x86_64
glusterfs-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: 6ff6fe7b2e49c1d671e9e858040eec3ced3e50b46a6366b5869abb322bc4e0e8
glusterfs-api-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: e2af904c29ffe822cdc495004a752a790182ea16fe3a3fad00e88b925840e789
glusterfs-api-devel-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: 4dfb87c64089f89b3cad51d06f407f361e97025da1dcb53de4fb1675122dc405
glusterfs-cli-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: cc1641161a104e895291544237a16aba56e2d556bc29dc755078ff9ebb63c813
glusterfs-client-xlators-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: eb276a156e12c2bc12413b2723ee3489bc1a530c17e63c0e3c87d1b34d80efaa
glusterfs-debuginfo-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: f1bd4b80a00ff141d7b1e066118fde3436e2e9fcb87888ec67f5bdfc56a36b6b
glusterfs-devel-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: 23a1edba8bb06778b585f3ce545b980893a7b07a16603856f7c6078f272d2889
glusterfs-events-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: 6b9820cf15b5a2a821e199b758f09db1be0b372051367b6d1c7758c523fe35d1
glusterfs-fuse-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: 44ee39864ff9aae7052ad88f755725ae0e00eda90ee35b75081ea1124ece6de6
glusterfs-ganesha-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: f7ac34e37ce273760faeb32bd81fd6f7ef0905f0f4875b920089c9e593f14fe8
glusterfs-geo-replication-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: a7cf8931963ee923bc45a165876cff82cb585f8633575b85dea26d210c85ef07
glusterfs-libs-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: 12b72596ed2ed018207b4d49c0decc5d71dc0272f948a9757363c3cd7f1b0857
glusterfs-rdma-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: cab5860152ac04d42e9324d396a87395b1f8f6421c4167d87c957a8309c5ab52
glusterfs-server-3.8.4-44.el7rhgs.x86_64.rpm SHA-256: 92f3be379a914dc107b4d9c74362c6435c840f4adaf8fa3e53e6fcfccb3a8b26
python-gluster-3.8.4-44.el7rhgs.noarch.rpm SHA-256: 228004eb6406fa50b7f3ecce6799606d6cd5d5efbfc385cfeb7e7968c27cf436
redhat-release-server-7.4-25.el7rhgs.x86_64.rpm SHA-256: 6ca13c8d10dc4838ad767f1f006bb02e6fe90c628f6d70d799abd99aae4e27a0
redhat-storage-server-3.3.0.2-1.el7rhgs.noarch.rpm SHA-256: 9557b0ccfcd52e2dbe893796b79049f2083d90230a798fe4268a2b8817d98b4f

Red Hat Virtualization Host 4 for RHEL 7

SRPM
x86_64
glusterfs-debuginfo-3.8.4-44.el7.x86_64.rpm SHA-256: 22b1b036847384ffea131e51406a1ecf9d748c682379a04379d211e78d0b2755
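
The SHA-256 digests listed above can be checked against a downloaded
package before installation. The snippet below is a small illustration
rather than a Red Hat tool; the file name and expected digest are copied
from the Red Hat Enterprise Linux Server 7 listing above.

    #!/usr/bin/env python
    # Sketch: verify a downloaded RPM against the SHA-256 digest published
    # in this advisory. The file name and digest used here are those of the
    # glusterfs x86_64 package from the RHEL Server 7 listing above.
    import hashlib

    EXPECTED = "43ce9cfdd8e0d71ff4fa3f59f752c2cdbb52f251d7bb0364e041ea32e588ad8f"

    def sha256sum(path, chunk_size=1024 * 1024):
        # Hash the file in chunks so large RPMs do not have to fit in memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        actual = sha256sum("glusterfs-3.8.4-44.el7.x86_64.rpm")
        print("OK" if actual == EXPECTED else "MISMATCH: " + actual)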

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.
