Issued: 2019-10-30
Updated: 2019-10-30
RHEA-2019:3249 - Product Enhancement Advisory
Synopsis
glusterfs bug fix and enhancement update
Type/Severity
Product Enhancement Advisory
Topic
Updated glusterfs packages that fix several bugs and add various enhancements are now available.
Description
Red Hat Gluster Storage is a software-only, scale-out storage solution that
provides flexible and affordable unstructured data storage. It unifies data
storage and infrastructure, increases performance, and improves
availability and manageability to meet enterprise-level storage challenges.
The glusterfs packages have been rebased to upstream version 6.
(BZ#1699719)
This advisory fixes the following bugs:
- O_TRUNC is ignored during open-fd heal to prevent invalid locks. (BZ#1706549)
- Reading from bad blocks is now prevented. (BZ#1732774)
- File descriptors are marked as bad when updates to file size or version fail. (BZ#1745107)
- Stale linkto files are identified and deleted. (BZ#1672869)
- The network family is now set correctly during socket initialization so that events are sent to consumers. (BZ#1732443)
- Dynamically allocated memory is freed correctly. (BZ#1734423, BZ#1736830)
- Non-root geo-replication sessions can now use gluster commands by setting the gluster-command-dir and gluster-command-slave-dir options; see the sketch after this list. (BZ#1712591)
- The new auto-invalidation and performance.global-cache-invalidation options retain page cache content to improve performance; an example follows this list. (BZ#1676468)
- Geo-replication now succeeds when a symbolic link is renamed multiple
times between syncs. (BZ#1670429)
- During geo-replication, workers now read stderr output while tarssh runs,
avoiding deadlocks. (BZ#1708116)
- Geo-replication no longer creates extra files when many different
files are renamed to the same destination path. (BZ#1708121)
- Fixed memory leak when viewing status of all volumes. (BZ#1670415,
BZ#1686255)
- Rebalance socket files are now named using a hash based on volume name
and UUID to avoid character limits and ensure rebalance occurs.
(BZ#1720192)
- The gluster get-state command now reports brick status more accurately. (BZ#1726991)
- Optimized retrieving volume information to prevent handshake timeouts
when 1500+ volumes are configured in a cluster. (BZ#1652461)
- Access Control List settings are correctly removed from volumes.
(BZ#1685246)
- When eager-lock lock acquisition failed during a write transaction, the
previous lock was retained, which blocked all subsequent writes and caused
a hang. This is now handled correctly and more specific log messages have
been added to assist in diagnosing related issues. (BZ#1688395)
- Clients no longer receive a combination of new and stale data in some situations, because Gluster NFS clients now honor cluster.quorum-count when cluster.quorum-type is set to fixed; see the configuration sketch after this list. (BZ#1642425)
- Shard deletion is now a batched background process to control contention on the .shard directory. The batch size is 100 by default and can be tuned using features.shard-deletion-rate, as shown after this list. (BZ#1568758)
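The non-root geo-replication change above (BZ#1712591) is applied through geo-replication session configuration. A minimal sketch, assuming an existing session with placeholder volume, user, and host names, and assuming the gluster binaries live under /usr/sbin/ on both ends:

```
# Placeholders: master volume "mastervol", unprivileged user "geoaccount",
# slave host "slavenode", slave volume "slavevol". Adjust the paths to wherever
# the gluster binaries are installed on the master and slave nodes.
gluster volume geo-replication mastervol geoaccount@slavenode::slavevol \
    config gluster-command-dir /usr/sbin/
gluster volume geo-replication mastervol geoaccount@slavenode::slavevol \
    config gluster-command-slave-dir /usr/sbin/
```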
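For the page-cache change above (BZ#1676468), the volume-level option named in this advisory can be toggled with gluster volume set. A hedged sketch with a placeholder volume name; whether auto-invalidation is set the same way or at mount time should be confirmed against the Red Hat Gluster Storage 3.5 documentation:

```
# Placeholder volume "myvol": retain client page cache content and invalidate
# it selectively on upcalls instead of dropping it wholesale.
gluster volume set myvol performance.global-cache-invalidation on
```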
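The fixed-quorum behaviour above (BZ#1642425) applies to volumes configured as in the following sketch; the volume name and the count of 2 are placeholders:

```
# Placeholder volume "myvol": require a fixed number of bricks (here 2) to be
# up for writes, rather than an automatic majority.
gluster volume set myvol cluster.quorum-type fixed
gluster volume set myvol cluster.quorum-count 2
```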
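The shard deletion batching above (BZ#1568758) exposes its batch size as a volume option; a sketch with placeholder values:

```
# Placeholder volume "myvol": delete up to 200 shards per batch instead of the
# default of 100.
gluster volume set myvol features.shard-deletion-rate 200
```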
This advisory also provides the following enhancements:
- Gluster-based time attributes are now available to avoid consistency
issues with kernel-based time attributes. (BZ#1583225, BZ#1699709,
BZ#1298724, BZ#1314508)
- The storage.fips-mode-rchecksum volume option is now enabled by default for new volumes on clusters with an op-version of 70000 or higher; see the sketch after this list. (BZ#1706683)
- The default maximum port number for bricks is now 60999 instead of 65535.
(BZ#1658448)
- Override the umask by using the following new options: storage.create-mask, storage.create-directory-mask, storage.force-create-mode, and storage.force-create-directory; an example follows this list. (BZ#1539679)
- A Certificate Revocation List (CRL) can now be set using the ssl.crl-path volume option, as shown after this list. (BZ#1583585)
- Bricks in different subvolumes can now be different sizes, and gluster
algorithms account for this when determining placement ranges for files.
(BZ#1290124)
- Users can set a different gluster statedump path for client gfapi
processes that cannot write to /var/run/gluster. (BZ#1720461)
- Improved performance when syncing renames. (BZ#1726000)
- The storage.reserve option now reserves brick space based on either an absolute size or a percentage; see the sketch after this list. (BZ#1573077)
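For the storage.fips-mode-rchecksum default above (BZ#1706683), the new default only applies to volumes created at op-version 70000 or later. A hedged sketch of checking the op-version and enabling the option on a pre-existing placeholder volume:

```
# Check the current cluster operating version.
gluster volume get all cluster.op-version
# After all nodes run the updated packages, raise the op-version (70000 per this advisory).
gluster volume set all cluster.op-version 70000
# Pre-existing volumes keep their previous default; enable the option explicitly if wanted.
gluster volume set myvol storage.fips-mode-rchecksum on
```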
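The umask-override options above (BZ#1539679) are ordinary volume options; a sketch using the option names given in this advisory, with a placeholder volume name and illustrative octal values:

```
# Placeholder volume "myvol"; masks and modes below are illustrative only.
gluster volume set myvol storage.create-mask 0777
gluster volume set myvol storage.create-directory-mask 0777
gluster volume set myvol storage.force-create-mode 0444
gluster volume set myvol storage.force-create-directory 0444
```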
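The CRL support above (BZ#1583585) is configured per volume; a sketch assuming the CRL files have been placed in a local directory (the path and volume name are placeholders):

```
# Placeholder volume "myvol": point gluster at the directory holding the CRL file(s).
gluster volume set myvol ssl.crl-path /etc/ssl/glusterfs-crl
```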
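For the storage.reserve enhancement above (BZ#1573077), a sketch of both forms; the exact unit syntax for the absolute-size form is an assumption and should be checked against the Red Hat Gluster Storage 3.5 documentation:

```
# Placeholder volume "myvol": reserve 5% of each brick...
gluster volume set myvol storage.reserve 5
# ...or reserve an absolute size (unit syntax assumed, e.g. 20GB).
gluster volume set myvol storage.reserve 20GB
```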
All users are advised to upgrade to these updated packages to receive these
fixes and enhancements.
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to: https://access.redhat.com/articles/11258
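As a general illustration only, on a registered Red Hat Enterprise Linux 7 system the updated packages are typically applied with yum; the exact procedure for Red Hat Gluster Storage nodes (for example, in-service versus offline upgrade) is documented separately:

```
# Apply pending errata, including the glusterfs packages from this advisory,
# then restart the affected gluster services as required by your environment.
yum update
```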
Affected Products
- Red Hat Enterprise Linux Server 7 x86_64
- Red Hat Virtualization 4 for RHEL 7 x86_64
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64
Fixes
- BZ - 1214489 - performance.read-ahead causes huge increase in unnecessary network traffic
- BZ - 1277328 - glusterfsd uses a lot of CPU when performance.readdir-ahead is enabled (default)
- BZ - 1403459 - [OpenSSL] : auth.ssl-allow has no option description.
- BZ - 1403530 - [OpenSSL] : Retrieving the value of "client.ssl" option,before SSL is set up, fails .
- BZ - 1475133 - Files are not rebalanced if destination brick(available size) is of smaller size than source brick(available size)
- BZ - 1477786 - [GSS]More than 2 smbd processes on a gluster node make application performance dramatically slow down
- BZ - 1480091 - Seeing option "not recognized" warnings when brick graph gets loaded
- BZ - 1480907 - Geo-rep help looks to have a typo.
- BZ - 1493284 - glusterfs-devel RPM install should not require gluster processes to be shut down
- BZ - 1497139 - gfapi: APIs to register/unregister for upcall events
- BZ - 1501888 - [BitRot] man page of gluster needs to be updated for scrub-frequency
- BZ - 1529501 - [shd] : shd occupies ~7.8G in memory while toggling cluster.self-heal-daemon in loop , possibly leaky.
- BZ - 1568758 - Block delete times out for blocks created of very large size
- BZ - 1572163 - atime/mtime is not restored after healing for entry self heals
- BZ - 1573077 - [RFE] storage.reserve option should take size of disk as input instead of percentage
- BZ - 1578703 - gluster volume status inode getting timed out after 30 minutes with no output/error
- BZ - 1582394 - [geo-rep]: False error message in geo-rep logs (rsnapshot usecase)
- BZ - 1583225 - [GSS] ctime sync issues with Solr
- BZ - 1583585 - [RFE] Revoke access from nodes using Certificate Revoke List in SSL
- BZ - 1589359 - [CNS]gluster-file: tar: file changed as we read it
- BZ - 1599587 - [geo-rep]: Worker still ACTIVE after killing bricks on brick-mux enabled setup
- BZ - 1600918 - [Disperse] : Client side heal is not removing dirty flag for some of the files.
- BZ - 1622957 - [geo-rep]: gfid-conflict-resolution config option is accepting any value instead of only boolean
- BZ - 1623420 - [RFE] Change requirement to be chronyd instead of ntp
- BZ - 1640003 - improper checking to avoid identical mounts
- BZ - 1642425 - gluster-NFS is not honoring the quorum count
- BZ - 1652461 - With 1800+ vol and simultaneous 2 gluster pod restarts, running gluster commands gives issues once all pods are up
- BZ - 1659487 - [GSS][RHEV: The mount logs of 'vmstore' volume are consuming all the space under /var/log]
- BZ - 1668001 - Image size as reported from the fuse mount is incorrect
- BZ - 1670415 - Incremental memory leak of glusterd process in 'gluster volume status all detail
- BZ - 1671862 - Geo-rep setup creates an incorrectly formatted authorized_keys file
- BZ - 1676468 - glusterfs-fuse client not benefiting from page cache on read after write
- BZ - 1676495 - Excessive AFR messages from gluster showing in RHGSWA.
- BZ - 1686255 - glusterd leaking memory when issued gluster vol status all tasks continuosly
- BZ - 1687641 - Brick process has coredumped, when starting glusterd
- BZ - 1688231 - geo-rep session creation fails with IPV6
- BZ - 1691224 - [GSS]merge ctime upstream fix into the existing RHGS version
- BZ - 1693933 - Qualify EC volumes with brick multiplexing over fuse access protocol
- BZ - 1694595 - gluster fuse mount crashed, when deleting 2T image file from RHV Manager UI
- BZ - 1695057 - Warning message - "fdctx not valid [Invalid argument]" after gluster node reboot
- BZ - 1695081 - Log level changes do not take effect until the process is restarted
- BZ - 1696334 - Improve gluster-cli error message when node detach fails due to existing bricks
- BZ - 1697790 - Volume cannot get exported for GNFS
- BZ - 1697820 - rhgs 3.5 server not compatible with 3.4 client
- BZ - 1698435 - glusterfs-libs: usage of inet_addr() may impact IPv6
- BZ - 1698436 - build: missing dependencies in spec file
- BZ - 1698919 - Brick is not able to detach successfully in brick_mux environment
- BZ - 1699271 - [geo-rep]: Geo-rep FAULTY in RHGS 3.5
- BZ - 1699719 - [RHEL7] rebase RHGS 3.5.0 to upstream glusterfs-6
- BZ - 1699835 - Glusterd did not start by default after node reboot
- BZ - 1701811 - ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open
- BZ - 1702298 - Custom xattrs are not healed on newly added brick
- BZ - 1703423 - Multiple disconnect events being propagated for the same child
- BZ - 1703455 - disperse.other-eager-lock is enabled by default
- BZ - 1703753 - portmap entries missing in glusterd statedumps
- BZ - 1704181 - Brick logs inundated with [2019-04-27 22:14:53.378047] I [dict.c:541:dict_get] (-->/usr/lib64/glusterfs/6.0/xlator/features/worm.so(+0x7241) [0x7fe857bb3241] -->/usr/lib64/glusterfs/6.0/xlator/features/locks.so(+0x1c219) [0x7fe857dda219] [Invalid argumen
- BZ - 1704769 - Creation of bulkvoldict thread logic is not correct while brick_mux is enabled for single volume
- BZ - 1704851 - Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup
- BZ - 1705018 - [GSS] vsftpd process getting blocked for more than 120 seconds while closing FD, crashing in __close_fd syscall
- BZ - 1706776 - maintain consistent values across for options when fetched at cluster level or volume level
- BZ - 1707246 - [glusterd]: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded
- BZ - 1708043 - [geo-rep]: Non-root - Unable to set up mountbroker root directory and group
- BZ - 1708180 - gluster-block: improvements to volume group profile options list
- BZ - 1708183 - Remove-brick shows warning cluster.force-migration enabled where as cluster.force-migration is disabled on the volume
- BZ - 1709087 - Capture memory consumption for gluster process at the time of throwing no memory available message
- BZ - 1709301 - ctime changes: tar still complains file changed as we read it if uss is enabled
- BZ - 1710233 - [RHEL7]update redhat-storage-server build for RHGS 3.5.0
- BZ - 1710701 - AFR-v2 does not log before attempting data self-heal
- BZ - 1711130 - [shd+geo-rep]: shd not coming up on 2 nodes
- BZ - 1711249 - bulkvoldict thread is not handling all volumes while brick multiplex is enabled
- BZ - 1711296 - Optimize glusterd code to copy dictionary in handshake code path
- BZ - 1712149 - update with entitlement certificate for RHEL 7.7
- BZ - 1712151 - update with entitlement certificate for RHGS-3.5.0
- BZ - 1712154 - Update redhat-storage-server version details for RHGS-3.5.0
- BZ - 1713664 - Healing not proceeding during in-service upgrade on a disperse volume
- BZ - 1713890 - [Perf] Rename regressing by 10% in replica3 volume over fuse
- BZ - 1714078 - Update the dependency for nfs-ganesha-gluster package
- BZ - 1715407 - gluster v get <VolumeName> all still showing storage.fips-mode-rchecksum off
- BZ - 1715438 - directories going into split-brain
- BZ - 1715447 - Files in entry split-brain with "type mismatch"
- BZ - 1716385 - [EC] volume heal info command stucks forever even if all bricks are running
- BZ - 1716792 - heal launch failing with "Glusterd Syncop Mgmt brick op 'Heal' failed"
- BZ - 1716821 - DHT: directory permissions are wiped out
- BZ - 1717784 - Ganesha-gfapi logs are flooded with error messages related to "gf_uuid_is_null(gfid)) [Invalid argument]" when lookups are running from multiple clients
- BZ - 1717927 - [RHEL-8] Provide RHGS-3.5.0 Client Build
- BZ - 1719640 - glusterfind command failing with error "%{__python3}: bad interpreter:"
- BZ - 1720079 - [RHEL-8.1] yum update fails for rhel-8 glusterfs client packages 6.0-5.el8
- BZ - 1720163 - [Ganesha] Posix compliance truncate/00.t and chown/00.t tests are failing on 3.5.0 build
- BZ - 1720192 - [GSS]Can't rebalance GlusterFS volume because unix socket's path name is too long
- BZ - 1720461 - gfapi: provide an option for changing statedump path in glfs-api.
- BZ - 1720992 - [geo-rep]: gluster command not found while setting up a non-root session
- BZ - 1721028 - gluster v heal <vol_name> info is stuck
- BZ - 1721357 - DHT: Internal xattrs visible on the mount
- BZ - 1721477 - posix: crash in posix_cs_set_state on fallocate
- BZ - 1722131 - [In-service] Post upgrade glusterd is crashing with a backtrace on the upgraded node while issuing gluster volume status from non-upgraded nodes
- BZ - 1722209 - [GSS] Issues accessing gluster mount / df -h hanging
- BZ - 1722512 - DHT: severe memory leak in dht rename
- BZ - 1722801 - Incorrect power of two calculation in mem_pool_get_fn
- BZ - 1722829 - glusterd crashed while regaining quorum for the volume
- BZ - 1724885 - [RHHI-V] glusterd crashes after upgrade and unable to start it again
- BZ - 1725552 - Auto rotate shd log file
- BZ - 1726000 - geo-rep: performance improvement while syncing heavy renames with existing destination
- BZ - 1726991 - get-state does not show correct brick status
- BZ - 1727785 - gluster v geo-rep status command timing out
- BZ - 1728673 - Cannot see the "trusted.glusterfs.mdata" xattr for directory on a new brick after rebalance
- BZ - 1729108 - Memory leak in glusterfsd process
- BZ - 1729971 - core file generated - when EC volume stop and start is executed for 10 loops on a EC+Brickmux setup
- BZ - 1730914 - [GSS] Sometimes truncate and discard could cause data corruption when executed while self-heal is running
- BZ - 1731448 - [GSS] An Input/Output error happens on a disperse volume when doing unaligned writes to a sparse file
- BZ - 1731826 - bricks gone down unexpectedly
- BZ - 1731896 - fuse client hung when issued a lookup "ls" on an ec volume
- BZ - 1732443 - seeing error message in glustershd.log on volume start(or may be as part of shd graph regeneration) inet_pton failed with return code 0 [Invalid argument]
- BZ - 1732770 - fix truncate lock to cover the write in tuncate clean
- BZ - 1732774 - Disperse volume : data corruption with ftruncate data in 4+2 config
- BZ - 1732792 - Disperse volume : data corruption with ftruncate data in 4+2 config
- BZ - 1732793 - I/O error on writes to a disperse volume when replace-brick is executed
- BZ - 1733520 - potential deadlock while processing callbacks in gfapi
- BZ - 1733531 - Heal not completing after geo-rep session is stopped on EC volumes.
- BZ - 1733970 - Glusterfind pre command fails after files are modified from mount point
- BZ - 1734305 - ctime: When healing ctime xattr for legacy files, if multiple clients access and modify the same file, the ctime might be updated incorrectly.
- BZ - 1734423 - interrupts leak memory
- BZ - 1734534 - Upgrading a RHGS node fails when user edited glusterd.vol file exists
- BZ - 1734734 - Unable to create geo-rep session on a non-root setup.
- BZ - 1735514 - Open fd heal should filter O_APPEND/O_EXCL
- BZ - 1736830 - issuing an interrupt(ctrl+c) post lookup hung issue(BZ#1731896 ) is leaking memory and finally fuse mount gets OOM killed
- BZ - 1737705 - ctime: nfs client gets bad ctime for copied file which is on glusterfs disperse volume with ctime on
- BZ - 1743611 - RHEL 6 GlusterFS client creates files with time 01/01/1970
- BZ - 1743627 - ctime: If atime is updated via utimensat syscall ctime is not getting updated
- BZ - 1743634 - geo-rep: Changelog archive file format is incorrect
- BZ - 1744518 - log aio_error return codes in posix_fs_health_check
- BZ - 1746027 - systemctl start glusterd is getting timed out on the scaled setup with 2000 volumes
- BZ - 1748688 - [RFE]: Propagate error back to the application if a write fails on any brick in an erasure coded volume
- BZ - 1750241 - The result (hostname) of getnameinfo for all bricks (ipv6 addresses) are the same, while they are not.
- BZ - 1752713 - heketidbstorage bricks go down during PVC creation
- BZ - 1754407 - geo-rep: non-root session going fault due improper sub-command
- BZ - 1754790 - glustershd.log getting flooded with "W [inode.c:1017:inode_find] (-->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe3f9) [0x7fd09b0543f9] -->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe19c) [0x7fd09b05419 TABLE NOT FOUND"
- BZ - 1755227 - Remove-brick operation displays warning message related to cluster.force-migration
- BZ - 1756325 - Rebalance is causing glusterfs crash on client node
- BZ - 1757420 - [GSS] memory leak in glusterfsd with error from iot_workers_scale function
- BZ - 1758432 - Rebalance causing IO Error - File descriptor in bad state
- BZ - 1758618 - # gluster v info --xml is always returning <distCount>3</distCount> for all Nx3 volumes
- BZ - 1760261 - rebalance start is succeeding when quorum is not met
- BZ - 1760939 - [geo-rep] sync_method showing rsync instead of tarssh post in-service upgrade from 3.4.4 to 3.5.0
- BZ - 1763412 - [geo-rep] Workers crashing for non-root geo-rep sessions with OSError: [Errno 13] Permission denied when mkdir and mv is executed
- BZ - 1764202 - cgroup control-cpu-load.sh script not working
- BZ - 1765555 - peers went into rejected state, after upgrade from 3.3.1 to 3.5.0
CVEs
References
(none)
Red Hat Enterprise Linux Server 7
SRPM
Package | Checksum
---|---
glusterfs-6.0-21.el7.src.rpm | SHA-256: 64f604cd985b636b396dd800e7a1036983da3c10bb6e83a4a9125ecbd6f9bf31 |
x86_64
Package | Checksum
---|---
glusterfs-6.0-21.el7.x86_64.rpm | SHA-256: abcaf7995f0b3b31ce611208b8398109c8e407f02f6396673b74e4e009acd682 |
glusterfs-api-6.0-21.el7.x86_64.rpm | SHA-256: 18975a572dcb2e62ece588bde84dab045a5a18104dbbd6897532e11223428194 |
glusterfs-api-devel-6.0-21.el7.x86_64.rpm | SHA-256: 43e144915e7a490c99a83d871d3b6fdb34d4b68cc8596634f608cec4c3d36457 |
glusterfs-cli-6.0-21.el7.x86_64.rpm | SHA-256: b60fb967091887d795d25e8d7836f53f05a7b3e3303fa8c739604e6bf9375e44 |
glusterfs-client-xlators-6.0-21.el7.x86_64.rpm | SHA-256: 647879020bb3b79663ef3b3728316c15b7186d127382f52e3cd892023539814e |
glusterfs-debuginfo-6.0-21.el7.x86_64.rpm | SHA-256: 260aeb6c371d8fdbf79a23d3c55a70f17a5c1fb23b40cf5c99010584f1eddf75 |
glusterfs-devel-6.0-21.el7.x86_64.rpm | SHA-256: 67c92102253e8e928129d6d5132678aad9f7974e8adca064965f55b87c022dac |
glusterfs-fuse-6.0-21.el7.x86_64.rpm | SHA-256: 3c78eb52b06dbf69027156a6f07e0d23c5cf210f8d55f996579995eb66bcbf67 |
glusterfs-libs-6.0-21.el7.x86_64.rpm | SHA-256: 438eb3cfc6d11f88da233d54ea39060029f0a20d3215f281d2289d74b6b5a6b9 |
glusterfs-rdma-6.0-21.el7.x86_64.rpm | SHA-256: 0b0d1674f7727798694cfbf1c73eae679b541cf7d0fef595549dd38f468af1da |
python2-gluster-6.0-21.el7.x86_64.rpm | SHA-256: 44cc539da418135b863279eed6ab2cb43dc648a5ecd88ef614fb7bf536777db4 |
Red Hat Virtualization 4 for RHEL 7
SRPM
Package | Checksum
---|---
glusterfs-6.0-21.el7.src.rpm | SHA-256: 64f604cd985b636b396dd800e7a1036983da3c10bb6e83a4a9125ecbd6f9bf31 |
x86_64
Package | Checksum
---|---
glusterfs-6.0-21.el7.x86_64.rpm | SHA-256: abcaf7995f0b3b31ce611208b8398109c8e407f02f6396673b74e4e009acd682 |
glusterfs-api-6.0-21.el7.x86_64.rpm | SHA-256: 18975a572dcb2e62ece588bde84dab045a5a18104dbbd6897532e11223428194 |
glusterfs-api-devel-6.0-21.el7.x86_64.rpm | SHA-256: 43e144915e7a490c99a83d871d3b6fdb34d4b68cc8596634f608cec4c3d36457 |
glusterfs-cli-6.0-21.el7.x86_64.rpm | SHA-256: b60fb967091887d795d25e8d7836f53f05a7b3e3303fa8c739604e6bf9375e44 |
glusterfs-client-xlators-6.0-21.el7.x86_64.rpm | SHA-256: 647879020bb3b79663ef3b3728316c15b7186d127382f52e3cd892023539814e |
glusterfs-debuginfo-6.0-21.el7.x86_64.rpm | SHA-256: 260aeb6c371d8fdbf79a23d3c55a70f17a5c1fb23b40cf5c99010584f1eddf75 |
glusterfs-devel-6.0-21.el7.x86_64.rpm | SHA-256: 67c92102253e8e928129d6d5132678aad9f7974e8adca064965f55b87c022dac |
glusterfs-fuse-6.0-21.el7.x86_64.rpm | SHA-256: 3c78eb52b06dbf69027156a6f07e0d23c5cf210f8d55f996579995eb66bcbf67 |
glusterfs-libs-6.0-21.el7.x86_64.rpm | SHA-256: 438eb3cfc6d11f88da233d54ea39060029f0a20d3215f281d2289d74b6b5a6b9 |
glusterfs-rdma-6.0-21.el7.x86_64.rpm | SHA-256: 0b0d1674f7727798694cfbf1c73eae679b541cf7d0fef595549dd38f468af1da |
Red Hat Gluster Storage Server for On-premise 3 for RHEL 7
SRPM
Package | Checksum
---|---
glusterfs-6.0-21.el7rhgs.src.rpm | SHA-256: 921ddae9d2e80a224b94daa6ddd43689761c2cb6147cab7e55b6e274cb84f36a |
redhat-release-server-7.7-16.el7rhgs.src.rpm | SHA-256: fe5f3b5ade0ca4c3e2e25b905f23c01400776d16a90afa832ce4b3202aa8135f |
redhat-storage-logos-70.7.0-3.el7rhgs.src.rpm | SHA-256: 46bd58c96ff0ab867e9b66d282345026455a47cef1cd01b16d2df343368dd8ea |
redhat-storage-server-3.5.0.0-1.el7rhgs.src.rpm | SHA-256: 9b5986ea826941ecdec69df0ba75a31b5eef730ae39180cc23c5ef0bd99be416 |
x86_64
Package | Checksum
---|---
glusterfs-6.0-21.el7rhgs.x86_64.rpm | SHA-256: f46fef7cfe9fd2cee37f8229c5b8144c842e16bb7026838b0e7cd9d3f596dabf |
glusterfs-api-6.0-21.el7rhgs.x86_64.rpm | SHA-256: 52c34df4fb2c20573136adeb9d055de1de5057e7469e202566a402dd1a4bcace |
glusterfs-api-devel-6.0-21.el7rhgs.x86_64.rpm | SHA-256: b2b2d8dc08ee6c27afa5ae1b5e87413d569885599db85f164139931e162d3af3 |
glusterfs-cli-6.0-21.el7rhgs.x86_64.rpm | SHA-256: b1625abf96f406e5d844c3f521cf9e498b71ac92fa2fa961b08ec336fe5de552 |
glusterfs-client-xlators-6.0-21.el7rhgs.x86_64.rpm | SHA-256: bb5e8b1ee42a1996796ead1168cc3e28e27fd29475ba545a79e7774ab318ee4d |
glusterfs-debuginfo-6.0-21.el7rhgs.x86_64.rpm | SHA-256: bc9dbc3e11d29c48135f974a0459142b3be91c1abbf24920bcf1cb4e75f06d62 |
glusterfs-devel-6.0-21.el7rhgs.x86_64.rpm | SHA-256: f86a472e0eda15c0ab1448258e87ceceb00b2778fb039829a9e7743785b5214b |
glusterfs-events-6.0-21.el7rhgs.x86_64.rpm | SHA-256: 32b48edd26e2320b4af644b76b34264b825548cd2e5f6c191ca3fbc47e258d9d |
glusterfs-fuse-6.0-21.el7rhgs.x86_64.rpm | SHA-256: 73f604d189bfb02dd33b6ad53c2ebcb79e1d84a6ad0c7c6da8cd197f24573993 |
glusterfs-ganesha-6.0-21.el7rhgs.x86_64.rpm | SHA-256: c3d499f9f147ff497e76e6f191fea07d59676a913a815c83afedc2ddd7d3134d |
glusterfs-geo-replication-6.0-21.el7rhgs.x86_64.rpm | SHA-256: b43d8dcb02520f520cfe30e13ad2f052ed059e319391489cb3e46fc6e7a89b43 |
glusterfs-libs-6.0-21.el7rhgs.x86_64.rpm | SHA-256: 0d51ab3bd83f2885b31655b2872ac6d5bdd28fb817fc06a2feb063f145006798 |
glusterfs-rdma-6.0-21.el7rhgs.x86_64.rpm | SHA-256: c861728dbf57235b1ea10df7dcb20485f6559c85c2a0a221b727e9c4ad3078a8 |
glusterfs-resource-agents-6.0-21.el7rhgs.noarch.rpm | SHA-256: d978b8322154e05e47b0c8459e38629007a910617427e0b083ece9cbea70ec06 |
glusterfs-server-6.0-21.el7rhgs.x86_64.rpm | SHA-256: d440fd76b9db055b34f6ea1fe216bc7bbab1fd56d8382b74cd201c166e93e1d1 |
python2-gluster-6.0-21.el7rhgs.x86_64.rpm | SHA-256: 94eab04601d1d4f269948891c9b9376eac45a1a05a95c02e2168136587d47cca |
redhat-release-server-7.7-16.el7rhgs.x86_64.rpm | SHA-256: 5dce77f9f732bb4a2ee84ccc168cc0715b676a4f80f7c5d31cc3ed37099928d3 |
redhat-storage-logos-70.7.0-3.el7rhgs.noarch.rpm | SHA-256: eccb8b9afcdb82d055433fa04f502f09735aa3f5c9e35bd56225c3092259e788 |
redhat-storage-server-3.5.0.0-1.el7rhgs.noarch.rpm | SHA-256: c0ce315c95110122fe550f7863e3f9561c6373b21def34b1fa564a8912452455 |
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.