Red Hat Product Errata RHSA-2017:0486 - Security Advisory
Issued: 2017-03-23
Updated: 2017-03-23

Synopsis

Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update

Type/Severity

Security Advisory: Moderate

Topic

An update is now available for Red Hat Gluster Storage 3.2 on Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Gluster Storage is a software-only, scale-out storage solution that provides flexible and affordable unstructured data storage. It unifies data storage and infrastructure, increases performance, and improves availability and manageability to meet enterprise-level storage challenges.

The following packages have been upgraded to a later upstream version: glusterfs (3.8.4), redhat-storage-server (3.2.0.2), vdsm (4.17.33). (BZ#1362376)

Security Fix(es):

  • It was found that the glusterfs-server RPM package would write a file with a predictable name into the world-readable /tmp directory. A local attacker could potentially use this flaw to escalate their privileges to root by modifying the shell script during the installation of the glusterfs-server package. (CVE-2015-1795)

This issue was discovered by Florian Weimer of Red Hat Product Security.
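The class of flaw described above, a predictable file name in a world-readable directory, can be sketched with a generic shell comparison. The file names below are illustrative only, not the actual paths used by the glusterfs-server %pretrans scriptlet:

```shell
# Vulnerable pattern (the class of flaw in CVE-2015-1795): a fixed,
# predictable name under world-readable /tmp can be pre-created or
# replaced by a local attacker between creation and use.
unsafe=/tmp/glusterfs-pretrans.sh       # illustrative name, not the real path

# Safer pattern: mktemp(1) creates the file atomically with an
# unpredictable suffix and mode 0600, closing the race window.
safe=$(mktemp /tmp/glusterfs-pretrans.XXXXXX)
perms=$(stat -c '%a' "$safe")           # "600" on Linux with GNU coreutils
echo "$safe $perms"
rm -f "$safe"
```

Because mktemp both generates the name and creates the file in one atomic step, an attacker cannot win the race by guessing the name in advance.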

Bug Fix(es):

  • Bricks remain stopped if server quorum is no longer met, or if server quorum is disabled, to ensure that bricks in maintenance are not started incorrectly. (BZ#1340995)
  • The metadata cache translator has been updated to improve Red Hat Gluster Storage performance when reading small files. (BZ#1427783)
  • The 'gluster volume add-brick' command is no longer allowed when the replica count has increased and any replica bricks are unavailable. (BZ#1404989)
  • Split-brain resolution commands work regardless of whether client-side heal or the self-heal daemon are enabled. (BZ#1403840)

Enhancement(s):

  • Red Hat Gluster Storage now provides Transport Layer Security support for Samba and NFS-Ganesha. (BZ#1340608, BZ#1371475)
  • A new reset-sync-time option enables resetting the sync time attribute to zero when required. (BZ#1205162)
  • Tiering demotions are now triggered at most 5 seconds after a high-watermark breach event. Administrators can use the cluster.tier-query-limit volume parameter to specify the number of records extracted from the heat database during demotion. (BZ#1361759)
  • The /var/log/glusterfs/etc-glusterfs-glusterd.vol.log file is now named /var/log/glusterfs/glusterd.log. (BZ#1306120)
  • The 'gluster volume attach-tier/detach-tier' commands are considered deprecated in favor of the new commands, 'gluster volume tier VOLNAME attach/detach'. (BZ#1388464)
  • The HA_VOL_SERVER parameter in the ganesha-ha.conf file is no longer used by Red Hat Gluster Storage. (BZ#1348954)
  • The volfile server role can now be passed to another server when a server is unavailable. (BZ#1351949)
  • Ports can now be reused when they stop being used by another service. (BZ#1263090)
  • The thread pool limit for the rebalance process is now dynamic, and is determined based on the number of available cores. (BZ#1352805)
  • Brick verification at reboot now uses UUID instead of brick path. (BZ#1336267)
  • LOGIN_NAME_MAX is now used as the maximum length for the slave user instead of __POSIX_LOGIN_NAME_MAX, allowing for up to 256 characters including the NULL byte. (BZ#1400365)
  • The client identifier is now included in the log message to make it easier to determine which client failed to connect. (BZ#1333885)
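The tiering command change and the new tuning option noted above can be illustrated with the following CLI sketch. This is a config-style fragment, not output from a live cluster: 'myvol' is a placeholder volume name, and the commands assume a node with the gluster CLI installed.

```shell
# Illustrative only; requires a running Red Hat Gluster Storage node.
command -v gluster >/dev/null 2>&1 || { echo "gluster CLI not present; skipping"; exit 0; }

# Deprecated syntax (still accepted in 3.2, with a warning):
#   gluster volume detach-tier myvol start
# New preferred syntax per this advisory:
gluster volume tier myvol detach start

# Cap how many records are extracted from the heat database
# during each demotion cycle:
gluster volume set myvol cluster.tier-query-limit 100
```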

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

Affected Products

  • Red Hat Enterprise Linux Server 7 x86_64
  • Red Hat Virtualization 4 for RHEL 7 x86_64
  • Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64

Fixes

  • BZ - 1168606 - [USS]: setting the uss option to on fails when volume is in stopped state
  • BZ - 1200927 - CVE-2015-1795 glusterfs: glusterfs-server %pretrans rpm script temporary file issue
  • BZ - 1205162 - [georep]: If a georep session is recreated the existing files which are deleted from slave doesn't get sync again from master
  • BZ - 1211845 - glusterd: response not aligned
  • BZ - 1240333 - [geo-rep]: original directory and renamed directory both at the slave after rename on master
  • BZ - 1241314 - when enable-shared-storage is enabled, volume get still shows that the option is disabled
  • BZ - 1245084 - [RFE] changes needed in snapshot info command's xml output.
  • BZ - 1248998 - [AFR]: Files not available in the mount point after converting Distributed volume type to Replicated one.
  • BZ - 1256483 - Unreleased packages in RHGS 3.1 AMI [RHEL 7]
  • BZ - 1256524 - [RFE] reset brick
  • BZ - 1257182 - Rebalance is not considering the brick sizes while fixing the layout
  • BZ - 1258267 - 1 mkdir generates tons of log messages from dht xlator
  • BZ - 1263090 - glusterd: add brick command should re-use the port for listening which is freed by remove-brick.
  • BZ - 1264310 - DHT: Rebalance hang while migrating the files of disperse volume
  • BZ - 1278336 - nfs client I/O stuck post IP failover
  • BZ - 1278385 - Data Tiering:Detach tier operation should be resilient(continue) when the volume is restarted
  • BZ - 1278394 - gluster volume status xml output of tiered volume has all the common services tagged under <coldBricks>
  • BZ - 1278900 - check_host_list() should be more robust
  • BZ - 1284873 - Poor performance of directory enumerations over SMB
  • BZ - 1286038 - glusterd process crashed while setting the option "cluster.extra-hash-regex"
  • BZ - 1286572 - [FEAT] DHT - rebalance - rebalance status o/p should be different for 'fix-layout' option, it should not show 'Rebalanced-files' , 'Size', 'Scanned' etc as it is not migrating any files.
  • BZ - 1294035 - gluster fails to propagate permissions on the root of a gluster export when adding bricks
  • BZ - 1296796 - [DHT]: Rebalance info for remove brick operation is not showing after glusterd restart
  • BZ - 1298118 - Unable to get the client statedump, as /var/run/gluster directory is not available by default
  • BZ - 1299841 - [tiering]: Files of size greater than that of high watermark level should not be promoted
  • BZ - 1306120 - [GSS] [RFE] Change the glusterd log file name to glusterd.log
  • BZ - 1306656 - [GSS] - Brick ports changed after configuring I/O and management encryption
  • BZ - 1312199 - [RFE] quota: enhance quota enable and disable process
  • BZ - 1315544 - [GSS] -Gluster NFS server crashing in __mnt3svc_umountall
  • BZ - 1317653 - EINVAL errors while aggregating the directory size by quotad
  • BZ - 1318000 - [GSS] - Glusterd not operational due to snapshot conflicting with nfs-ganesha export file in "/var/lib/glusterd/snaps"
  • BZ - 1319078 - files having different Modify and Change date on replicated brick
  • BZ - 1319886 - gluster volume info --xml returns 0 for nonexistent volume
  • BZ - 1324053 - quota/cli: quota list with path not working when limit is not set
  • BZ - 1325821 - gluster snap status xml output shows incorrect details when the snapshots are in deactivated state
  • BZ - 1326066 - [hc][selinux] AVC denial messages seen in audit.log while starting the volume in HCI environment
  • BZ - 1327952 - rotated FUSE mount log is using to populate the information after log rotate.
  • BZ - 1328451 - observing " Too many levels of symbolic links" after adding bricks and then issuing a replace brick
  • BZ - 1332080 - [geo-rep+shard]: Files which were synced to slave before enabling shard doesn't get sync/remove upon modification
  • BZ - 1332133 - glusterd + bitrot : unable to create clone of snapshot. error "xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file:
  • BZ - 1332542 - Tiering related core observed with "uuid_is_null () message".
  • BZ - 1333406 - [HC]: After bringing down and up of the bricks VM's are getting paused
  • BZ - 1333484 - slow readdir performance in SMB clients
  • BZ - 1333749 - glusterd: glusterd provides stale port information when a volume is recreated with same brick path
  • BZ - 1333885 - client ID should logged when SSL connection fails
  • BZ - 1334664 - Excessive errors messages are seen in hot tier's brick logs in a tiered volume
  • BZ - 1334858 - [Perf] : ls-l is not as performant as it used to be on older RHGS builds
  • BZ - 1335029 - set errno in case of inode_link failures
  • BZ - 1336267 - [scale]: Bricks not started after node reboot.
  • BZ - 1336339 - Sequential volume start&stop is failing with SSL enabled setup.
  • BZ - 1336377 - Polling failure errors getting when volume is started&stopped with SSL enabled setup.
  • BZ - 1336764 - Bricks doesn't come online after reboot [ Brick Full ]
  • BZ - 1337391 - [Bitrot] Need a way to set scrub interval to a minute, for ease of testing
  • BZ - 1337444 - [Bitrot]: Scrub status- Certain fields continue to show previous run's details, even if the current run is in progress
  • BZ - 1337450 - [Bitrot+Sharding] Scrub status shows incorrect values for 'files scrubbed' and 'files skipped'
  • BZ - 1337477 - [Volume Scale] Volume start failed with "Error : Request timed out" after successfully creating & starting around 290 gluster volumes using heketi-cli
  • BZ - 1337495 - [Volume Scale] gluster node randomly going to Disconnected state after scaling to more than 290 gluster volumes
  • BZ - 1337565 - "nfs-grace-monitor" timed out messages observed
  • BZ - 1337811 - [GSS] - enabling glusternfs with nfs.rpc-auth-allow to many hosts failed
  • BZ - 1337836 - [Volume Scale] heketi-cli should not attempt to stop and delete a volume as soon as it receives a CLI timeout (120sec) but instead wait until the frame-timeout of 600sec
  • BZ - 1337863 - [SSL] : I/O hangs when run from multiple clients on an SSL enabled volume
  • BZ - 1338615 - [SSL] : gluster v set help does not show ssl options
  • BZ - 1338748 - SAMBA : Error and warning messages related to xlator/features/snapview-client.so adding up to the windows client log on performing IO operations
  • BZ - 1339159 - [geo-rep]: Worker died with [Errno 2] No such file or directory
  • BZ - 1340338 - "volume status inode" command is getting timed out if number of files are more in the mount point
  • BZ - 1340608 - [RFE] : Support SSL enabled volume via SMB v3
  • BZ - 1340756 - [geo-rep]: AttributeError: 'Popen' object has no attribute 'elines'
  • BZ - 1340995 - Bricks are starting when server quorum not met.
  • BZ - 1341934 - [Bitrot]: Recovery fails of a corrupted hardlink (and the corresponding parent file) in a disperse volume
  • BZ - 1342459 - [Bitrot]: Sticky bit files considered and skipped by the scrubber, instead of getting ignored.
  • BZ - 1343178 - [Stress/Scale] : I/O errors out from gNFS mount points during high load on an erasure coded volume,Logs flooded with Error messages.
  • BZ - 1343320 - [GSS] Gluster fuse client crashed generating core dump
  • BZ - 1343695 - [Disperse] : Assertion Failed Error messages in rebalance log post add-brick/rebalance.
  • BZ - 1344322 - [geo-rep]: Worker crashed with OSError: [Errno 9] Bad file descriptor
  • BZ - 1344651 - tiering : Multiple brick processes crashed on tiered volume while taking snapshots
  • BZ - 1344675 - Stale file handle seen on the mount of dist-disperse volume when doing IOs with nfs-ganesha protocol
  • BZ - 1344826 - [geo-rep]: Worker crashed with "KeyError: "
  • BZ - 1344908 - [geo-rep]: If the data is copied from .snaps directory to the master, it doesn't get sync to slave [First Copy]
  • BZ - 1345732 - SAMBA-DHT : Crash seen while rename operations in cifs mount and windows access of share mount
  • BZ - 1347251 - fix the issue of Rolling upgrade or non-disruptive upgrade of disperse or erasure code volume to work
  • BZ - 1347257 - spurious heal info as pending heal entries never end on an EC volume while IOs are going on
  • BZ - 1347625 - [geo-rep] Stopped geo-rep session gets started automatically once all the master nodes are upgraded
  • BZ - 1347922 - nfs-ganesha disable doesn't delete nfs-ganesha folder from /var/run/gluster/shared_storage
  • BZ - 1347923 - ganesha.enable remains on in volume info file even after we disable nfs-ganesha on the cluster.
  • BZ - 1348949 - ganesha/scripts : [RFE] store volume related configuration in shared storage
  • BZ - 1348954 - ganesha/glusterd : remove 'HA_VOL_SERVER' from ganesha-ha.conf
  • BZ - 1348962 - ganesha/scripts : copy modified export file during refresh-config
  • BZ - 1351589 - [RFE] Eventing for Gluster
  • BZ - 1351732 - gluster volume status <volume> client" isn't showing any information when one of the nodes in a 3-way Distributed-Replicate volume is shut down
  • BZ - 1351825 - yum groups install RH-Gluster-NFS-Ganesha fails due to outdated nfs-ganesha-nullfs
  • BZ - 1351949 - management connection loss when volfile-server goes down
  • BZ - 1352125 - Error: quota context not set inode (gfid:nnn) [Invalid argument]
  • BZ - 1352805 - [GSS] Rebalance crashed
  • BZ - 1353427 - [RFE] CLI to get local state representation for a cluster
  • BZ - 1354260 - quota : rectify quota-deem-statfs default value in gluster v set help command
  • BZ - 1356058 - glusterd doesn't scan for free ports from base range (49152) if last allocated port is greater than base port
  • BZ - 1356804 - Healing of one of the file not happened during upgrade from 3.0.4 to 3.1.3 ( In-service )
  • BZ - 1359180 - Make client.io-threads enabled by default
  • BZ - 1359588 - [Bitrot - RFE]: On demand scrubbing option to scrub
  • BZ - 1359605 - [RFE] Simplify Non Root Geo-replication Setup
  • BZ - 1359607 - [RFE] Non root Geo-replication Error logs improvements
  • BZ - 1359619 - [GSS]"gluster vol status all clients --xml" get malformed at times, causes gstatus to fail
  • BZ - 1360807 - [RFE] Generate events in GlusterD
  • BZ - 1360978 - [RFE]Reducing number of network round trips
  • BZ - 1361066 - [RFE] DHT Events
  • BZ - 1361068 - [RFE] Tier Events
  • BZ - 1361078 - [ RFE] Quota Events
  • BZ - 1361082 - [RFE]: AFR events
  • BZ - 1361084 - [RFE]: EC events
  • BZ - 1361086 - [RFE]: posix events
  • BZ - 1361098 - Feature: Entry self-heal performance enhancements using more granular changelogs
  • BZ - 1361101 - [RFE] arbiter for 3 way replication
  • BZ - 1361118 - [RFE] Geo-replication Events
  • BZ - 1361155 - Upcall related events
  • BZ - 1361170 - [Bitrot - RFE]: Bitrot Events
  • BZ - 1361184 - [RFE] Provide snapshot events for the new eventing framework
  • BZ - 1361513 - EC: Set/unset dirty flag for all the update operations
  • BZ - 1361519 - [Disperse] dd + rm + ls lead to IO hang
  • BZ - 1362376 - [RHEL7] Rebase glusterfs at RHGS-3.2.0 release
  • BZ - 1364422 - [libgfchangelog]: If changelogs are not available for the requested time range, no distinguished error
  • BZ - 1364551 - GlusterFS lost track of 7,800+ file paths preventing self-heal
  • BZ - 1366128 - "heal info --xml" not showing the brick name of offline bricks.
  • BZ - 1367382 - [RFE]: events from protocol server
  • BZ - 1367472 - [GSS]Quota version not changing in the quota.conf after upgrading to 3.1.1 from 3.0.x
  • BZ - 1369384 - [geo-replication]: geo-rep Status is not showing bricks from one of the nodes
  • BZ - 1369391 - configuration file shouldn't be marked as executable and systemd complains for it
  • BZ - 1370350 - Hosted Engine VM paused post replace-brick operation
  • BZ - 1371475 - [RFE] : Support SSL enabled volume via NFS Ganesha
  • BZ - 1373976 - [geo-rep]: defunct tar process while using tar+ssh sync
  • BZ - 1374166 - [GSS]deleted file from nfs-ganesha export goes in to .glusterfs/unlink in RHGS 3.1.3
  • BZ - 1375057 - [RHEL-7]Include vdsm and related dependency packages at RHGS 3.2.0 ISO
  • BZ - 1375465 - [RFE] Implement multi threaded self-heal for ec volumes
  • BZ - 1376464 - [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
  • BZ - 1377062 - /var/tmp/rpm-tmp.KPCugR: line 2: /bin/systemctl: No such file or directory
  • BZ - 1377387 - glusterd experiencing repeated connect/disconnect messages when shd is down
  • BZ - 1378030 - glusterd fails to start without installing glusterfs-events package
  • BZ - 1378131 - [GSS] - Recording (ffmpeg) processes on FUSE get hung
  • BZ - 1378300 - Modifications to AFR Events
  • BZ - 1378342 - Getting "NFS Server N/A" entry in the volume status by default.
  • BZ - 1378484 - warning messages seen in glusterd logs for each 'gluster volume status' command
  • BZ - 1378528 - [SSL] glustershd disconnected from glusterd
  • BZ - 1378676 - "transport.address-family: inet" option is not showing in the Vol info for 3.1.3 volume after updating to 3.2.
  • BZ - 1378677 - "nfs.disable: on" is not showing in Vol info by default for the 3.1.3 volumes after updating to 3.2
  • BZ - 1378867 - Poor smallfile read performance on Arbiter volume compared to Replica 3 volume
  • BZ - 1379241 - qemu-img segfaults while creating qcow2 image on the gluster volume using libgfapi
  • BZ - 1379919 - VM errors out while booting from the image on gluster replica 3 volume with compound fops enabled
  • BZ - 1379924 - gfapi: Fix fd ref leaks
  • BZ - 1379963 - [SELinux] [Eventing]: gluster-eventsapi shows a traceback while adding a webhook
  • BZ - 1379966 - Volume restart couldn't re-export the volume exported via ganesha.
  • BZ - 1380122 - Labelled geo-rep checkpoints hide geo-replication status
  • BZ - 1380257 - [RFE] eventsapi/georep: Events are not available for Checkpoint and Status Change
  • BZ - 1380276 - Poor write performance with arbiter volume after enabling sharding on arbiter volume
  • BZ - 1380419 - gNFS: Revalidate lookup of a file in case of gfid mismatch
  • BZ - 1380605 - Error and warning message getting while removing glusterfs-events-3.8.4-2 package
  • BZ - 1380619 - Ganesha crashes with segfault while doing refresh-config with 3.2 builds.
  • BZ - 1380638 - Files not being opened with o_direct flag during random read operation (Glusterfs 3.8.2)
  • BZ - 1380655 - Continuous errors getting in the mount log when the volume mount server glusterd is down.
  • BZ - 1380710 - invalid argument warning messages seen in fuse client logs 2016-09-30 06:34:58.938667] W [dict.c:418:dict_set] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x58722) 0-dict: !this || !value for key=link-count [Invalid argument]
  • BZ - 1380742 - Some tests in pynfs test suite fails with latest 3.2 builds.
  • BZ - 1381140 - OOM kill of glusterfs fuse mount process seen on both the clients with one doing rename and the other doing delete of same files
  • BZ - 1381353 - Ganesha crashes on volume restarts
  • BZ - 1381452 - OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
  • BZ - 1381822 - glusterd.log is flooded with socket.management: accept on 11 failed (Too many open files) and glusterd service stops
  • BZ - 1381831 - dom_md/ids is always reported in the self-heal info
  • BZ - 1381968 - md-cache: Invalidate cache entry in case of OPEN with O_TRUNC
  • BZ - 1382065 - SAMBA-ClientIO-Thread : Samba crashes with segfault while doing multiple mount & unmount of volume share with 3.2 builds
  • BZ - 1382277 - Incorrect volume type in the "glusterd_state" file generated using CLI "gluster get-state"
  • BZ - 1382345 - [RHEL7] SELinux prevents starting of RDMA transport type volumes
  • BZ - 1384070 - inconsistent file permissions b/w write permission and sticky bits(---------T ) displayed when IOs are going on with md-cache enabled (and within the invalidation cycle)
  • BZ - 1384311 - [Eventing]: 'gluster vol bitrot <volname> scrub ondemand' does not produce an event
  • BZ - 1384316 - [Eventing]: Events not seen when command is triggered from one of the peer nodes
  • BZ - 1384459 - Track the client that performed readdirp
  • BZ - 1384460 - segment fault while join thread reaper_thr in fini()
  • BZ - 1384481 - [SELinux] Snaphsot : Seeing AVC denied messages generated when snapshot and clones are created
  • BZ - 1384865 - USS: Snapd process crashed ,doing parallel clients operations
  • BZ - 1384993 - refresh-config fails and crashes ganesha when mdcache is enabled on the volume.
  • BZ - 1385468 - During rebalance continuous "table not found" warning messages are seen in rebalance logs
  • BZ - 1385474 - [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
  • BZ - 1385525 - Continuous warning messages getting when one of the cluster node is down on SSL setup.
  • BZ - 1385561 - [Eventing]: BRICK_CONNECTED and BRICK_DISCONNECTED events seen at every heartbeat when a brick-is-killed/volume-stopped
  • BZ - 1385605 - fuse mount point not accessible
  • BZ - 1385606 - 4 of 8 bricks (2 dht subvols) crashed on systemic setup
  • BZ - 1386127 - Remove-brick status output is showing status of fix-layout instead of original remove-brick status output
  • BZ - 1386172 - [Eventing]: UUID is showing zeros in the event message for the peer probe operation.
  • BZ - 1386177 - SMB[md-cache]: Hang seen while connecting and disconnecting a samba share multiple times, and other shares become inaccessible
  • BZ - 1386185 - [Eventing]: 'gluster volume tier <volname> start force' does not generate a TIER_START event
  • BZ - 1386280 - Rebase of redhat-release-server to that of RHEL-7.3
  • BZ - 1386366 - The FUSE client log is filling up with posix_acl_default and posix_acl_access messages
  • BZ - 1386472 - [Eventing]: 'VOLUME_REBALANCE' event messages have an incorrect volume name
  • BZ - 1386477 - [Eventing]: TIER_DETACH_FORCE and TIER_DETACH_COMMIT events seen even after confirming negatively
  • BZ - 1386538 - pmap_signin event fails to update brickinfo->signed_in flag
  • BZ - 1387152 - [Eventing]: Random VOLUME_SET events seen when no operation is done on the gluster cluster
  • BZ - 1387204 - [md-cache]: All bricks crashed while performing symlink and rename from client at the same time
  • BZ - 1387205 - SMB:[MD-Cache]: Multiple crashes seen while connecting and disconnecting a samba share multiple times from a Windows client
  • BZ - 1387501 - Asynchronous Unsplit-brain still causes Input/Output Error on system calls
  • BZ - 1387544 - [Eventing]: BRICK_DISCONNECTED events seen when a tier volume is stopped
  • BZ - 1387558 - libgfapi core dumps
  • BZ - 1387563 - [RFE]: md-cache performance enhancement
  • BZ - 1388464 - Throw a warning that older tier commands are deprecated and will be removed.
  • BZ - 1388560 - I/O Errors seen while accessing VM images on gluster volumes using libgfapi
  • BZ - 1388711 - Needs more testing of rebalance for distributed-dispersed volumes
  • BZ - 1388734 - glusterfs can't self heal character dev file for invalid dev_t parameters
  • BZ - 1388755 - Checkpoint completed event missing master node detail
  • BZ - 1389168 - glusterd: Display proper error message and fail the command if S32gluster_enable_shared_storage.sh hook script is not present during gluster volume set all cluster.enable-shared-storage <enable/disable> command
  • BZ - 1389422 - SMB[md-cache Private Build]:Error messages in brick logs related to upcall_cache_invalidate gf_uuid_is_null
  • BZ - 1389661 - Refresh config fails while exporting subdirectories within a volume
  • BZ - 1390843 - write-behind: flush stuck by former failed write
  • BZ - 1391072 - SAMBA : Unable to play video files in samba share mounted over windows system
  • BZ - 1391093 - [Samba-Crash] : Core logs were generated while working on random IOs
  • BZ - 1391808 - [setxattr_cbk] "Permission denied" warning messages are seen in logs while running pjd-fstest suite
  • BZ - 1392299 - [SAMBA-mdcache] Read hangs and leads to disconnect of the samba share while creating IOs from one client and reading from another
  • BZ - 1392761 - During sequential reads, backtraces are seen leading to IO hang
  • BZ - 1392837 - A hard link is lost during rebalance+lookup
  • BZ - 1392895 - Failed to enable nfs-ganesha after disabling nfs-ganesha cluster
  • BZ - 1392899 - stat of file is hung with possible deadlock
  • BZ - 1392906 - Input/Output Error seen while running iozone test on nfs-ganesha+mdcache enabled volume.
  • BZ - 1393316 - OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
  • BZ - 1393526 - [Ganesha] : Ganesha crashes intermittently during nfs-ganesha restarts.
  • BZ - 1393694 - The directories get renamed when data bricks are offline in 4*(2+1) volume
  • BZ - 1393709 - [Compound FOPs] Client side IObuff leaks at a high pace consumes complete client memory and hence making gluster volume inaccessible
  • BZ - 1393758 - I/O errors on FUSE mount point when reading and writing from 2 clients
  • BZ - 1394219 - Better logging when reporting failures of the kind "<file-path> Failing MKNOD as quorum is not met"
  • BZ - 1394752 - Seeing error messages [snapview-client.c:283:gf_svc_lookup_cbk] and [dht-helper.c:1666:dht_inode_ctx_time_update] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x5d75c)
  • BZ - 1395539 - ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
  • BZ - 1395541 - Lower version of package "redhat-release-server" is present in the RHEL7 RHGS3.2 ISO
  • BZ - 1395574 - netstat: command not found message is seen in /var/log/messages when IOs are running.
  • BZ - 1395603 - [RFE] JSON output for all Events CLI commands
  • BZ - 1395613 - Delayed Events if any one Webhook is slow
  • BZ - 1396166 - self-heal info command hangs after triggering self-heal
  • BZ - 1396361 - Scheduler : Scheduler should not depend on glusterfs-events package
  • BZ - 1396449 - [SAMBA-CIFS]: IO hangs in cifs mount during graph switch on & off
  • BZ - 1397257 - capture volume tunables in get-state dump
  • BZ - 1397267 - File creation fails with Input/output error + FUSE logs throws "invalid argument: inode [Invalid argument]"
  • BZ - 1397286 - Wrong value in Last Synced column during Hybrid Crawl
  • BZ - 1397364 - [compound FOPs]: file operation hangs with compound fops
  • BZ - 1397430 - PEER_REJECT, EVENT_BRICKPATH_RESOLVE_FAILED, EVENT_COMPARE_FRIEND_VOLUME_FAILED are not seen
  • BZ - 1397450 - NFS-Ganesha: Volume reset for any option causes reset of the ganesha enable option and brings down the ganesha services
  • BZ - 1397681 - [Eventing]: EVENT_POSIX_HEALTH_CHECK_FAILED event not seen when brick underlying filesystem crashed
  • BZ - 1397846 - [Compound FOPS]: seeing lot of brick log errors saying matching lock not found for unlock
  • BZ - 1398188 - [Arbiter] IO's Halted and heal info command hung
  • BZ - 1398257 - [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
  • BZ - 1398261 - After ganesha node reboot/shutdown, portblock process goes to FAILED state
  • BZ - 1398311 - [compound FOPs]: In a replica pair with one brick down, the other brick process and the fuse client process consume high memory at an increasing pace
  • BZ - 1398315 - [compound FOPs]: Memory leak while doing FOPs with brick down
  • BZ - 1398331 - With compound fops on, client process crashes when a replica is brought down while IO is in progress
  • BZ - 1398798 - [Ganesha+SSL] : Ganesha crashes on all nodes on volume restarts
  • BZ - 1399100 - GlusterFS client crashes during remove-brick operation
  • BZ - 1399105 - possible memory leak on client when writing to a file while another client issues a truncate
  • BZ - 1399476 - IO hung while doing an in-service update from 3.1.3 to 3.2
  • BZ - 1399598 - [USS,SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  • BZ - 1399698 - AVCs seen when ganesha cluster nodes are rebooted
  • BZ - 1399753 - "Insufficient privileges" messages observed in pcs status for nfs_unblock resource agent [RHEL7]
  • BZ - 1399757 - Ganesha services are not stopped when pacemaker quorum is lost
  • BZ - 1400037 - [Arbiter] Fixed layout failed on the volume after remove-brick while rmdir is progress
  • BZ - 1400057 - self-heal not happening, as self-heal info lists the same pending shards to be healed
  • BZ - 1400068 - [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
  • BZ - 1400093 - ls and move hung on disperse volume
  • BZ - 1400395 - Memory leak in client-side background heals.
  • BZ - 1400599 - [GANESHA] Failed to create directory named after the hostname of the new node in /var/lib/nfs/ganesha/ on already existing cluster nodes
  • BZ - 1401380 - [Compound FOPs] : Memory leaks while doing deep directory creation
  • BZ - 1401806 - [GANESHA] Volume restart (stop followed by start) does not re-export the volume
  • BZ - 1401814 - [Arbiter] Directory lookup failed with 11(EAGAIN) leading to rebalance failure
  • BZ - 1401817 - glusterfsd crashed while taking snapshot using scheduler
  • BZ - 1401869 - Rebalance did not happen after being triggered by adding a couple of bricks.
  • BZ - 1402360 - CTDB:NFS: CTDB failover doesn't work because of SELinux AVC's
  • BZ - 1402683 - Installation of latest available RHGS3.2 RHEL7 ISO is failing with error in the "SOFTWARE SELECTION"
  • BZ - 1402774 - Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
  • BZ - 1403120 - Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
  • BZ - 1403672 - Snapshot: After snapshot restore failure , snapshot goes into inconsistent state
  • BZ - 1403770 - Incorrect incrementation of volinfo refcnt during volume start
  • BZ - 1403840 - [GSS]xattr 'replica.split-brain-status' shows the file is in data-splitbrain but "heal split-brain latest-mtime" fails
  • BZ - 1404110 - [Eventing]: POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op
  • BZ - 1404541 - Found lower version of packages "python-paramiko", "python-httplib2" and "python-netifaces" in the latest RHGS3.2 RHEL7 ISO.
  • BZ - 1404569 - glusterfs-rdma package is not pulled while doing layered installation of RHGS 3.2 on RHEL7 and not present in RHGS RHEL7 ISO also by default and vdsm-cli pkg not pulled during lay.. Installation
  • BZ - 1404633 - GlusterFS process crashed after add-brick
  • BZ - 1404982 - VM pauses due to storage I/O error when one of the data bricks is down in an arbiter volume/replica volume
  • BZ - 1404989 - Fail add-brick command if replica count changes
  • BZ - 1404996 - gNFS: nfs.disable option to be set to off on existing volumes after upgrade to 3.2 and on for new volumes on 3.2
  • BZ - 1405000 - Remove-brick rebalance failed while rm -rf is in progress
  • BZ - 1405299 - fuse mount crashed when VM installation is in progress & one of the brick killed
  • BZ - 1405302 - vm does not boot up when first data brick in the arbiter volume is killed.
  • BZ - 1406025 - [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
  • BZ - 1406322 - repeated operation failed warnings in gluster mount logs with disperse volume
  • BZ - 1406401 - [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
  • BZ - 1406723 - [Perf] : significant Performance regression seen with disperse volume when compared with 3.1.3
  • BZ - 1408112 - [Arbiter] After killing a brick, writes drastically slow down
  • BZ - 1408413 - [ganesha + EC]posix compliance rename tests failed on EC volume with nfs-ganesha mount.
  • BZ - 1408426 - With granular-entry-self-heal enabled, a gfid mismatch is seen and the VM goes to a paused state after migrating to another host
  • BZ - 1408576 - [Ganesha+SSL] : Bonnie++ hangs during rewrites.
  • BZ - 1408639 - [Perf] : Sequential Writes are off target by 12% on EC backed volumes over FUSE
  • BZ - 1408641 - [Perf] : Sequential Writes have regressed by ~25% on EC backed volumes over SMB3
  • BZ - 1408655 - [Perf] : mkdirs are 85% slower on EC
  • BZ - 1408705 - [GNFS+EC] Cthon failures/issues with Lock/Special Test cases on disperse volume with GNFS mount
  • BZ - 1408836 - [ganesha+ec]: Contents of original file are not seen when hardlink is created
  • BZ - 1409135 - [Replicate] "RPC call decoding failed" leading to IO hang & mount inaccessible
  • BZ - 1409472 - brick crashed on systemic setup due to eventing regression
  • BZ - 1409563 - [SAMBA-SSL] Volume share hangs when multiple mounts & unmounts are performed from a Windows client on an SSL-enabled cluster
  • BZ - 1409782 - NFS server is not coming up for the 3.1.3 volume after updating to 3.2.0 (latest available build)
  • BZ - 1409808 - [Mdcache] clients being served wrong information about a file, can lead to file inconsistency
  • BZ - 1410025 - Extra lookup/fstats are sent over the network when a brick is down.
  • BZ - 1410406 - ganesha service crashed on all nodes of ganesha cluster on disperse volume when doing lookup while copying files remotely using scp
  • BZ - 1411270 - [SNAPSHOT] With all USS plugin enable .snaps directory is not visible in cifs mount as well as windows mount
  • BZ - 1411329 - OOM kill of glusterfsd during continuous add-bricks
  • BZ - 1411617 - Spurious split-brain error messages are seen in rebalance logs
  • BZ - 1412554 - [RHV-RHGS]: Application VM paused after an add-brick operation and the VM didn't come up after a power cycle.
  • BZ - 1412955 - Quota: After upgrade from 3.1.3 to 3.2 , gluster quota list command shows "No quota configured on volume repvol"
  • BZ - 1413351 - [Scale] : Brick process oom-killed and rebalance failed.
  • BZ - 1413513 - glusterfind: After glusterfind pre command execution all temporary files and directories /usr/var/lib/misc/glusterfsd/glusterfind/<session>/<volume>/ should be removed
  • BZ - 1414247 - client process crashed due to write behind translator
  • BZ - 1414663 - [GANESHA] Cthon lock test case is failing on nfs-ganesha mounted Via V3
  • BZ - 1415101 - glustershd process crashed on systemic setup
  • BZ - 1415583 - [Stress] : SHD logs flooded with "Heal Failed" messages, filling up "/" quickly
  • BZ - 1417177 - Split-brain resolution must check that all bricks are up to avoid serving inconsistent data (visible on x3 or more)
  • BZ - 1417955 - [RFE] Need to have group cli option to set all md-cache options using a single command
  • BZ - 1418011 - [RFE] disable client.io-threads on replica volume creation
  • BZ - 1418603 - Lower version packages ( heketi, libtiff ) present in RHGS3.2.0 RHEL7 ISO.
  • BZ - 1418901 - Include few more options in virt file
  • BZ - 1419859 - [Perf] : Renames are off target by 28% on EC FUSE mounts
  • BZ - 1420324 - [GSS] Bricks, once disconnected, do not reconnect if SSL is enabled
  • BZ - 1420635 - Modified volume options are not synced once offline nodes come up.
  • BZ - 1422431 - multiple glusterfsd process crashed making the complete subvolume unavailable
  • BZ - 1422576 - [RFE]: Provide a CLI option to reset the stime while deleting the geo-rep session
  • BZ - 1425740 - Disconnects in nfs mount leads to IO hang and mount inaccessible
  • BZ - 1426324 - common-ha: setup after teardown often fails
  • BZ - 1426559 - heal info is not giving correct output
  • BZ - 1427783 - Improve read performance on tiered volumes

CVEs

  • CVE-2015-1795

References

  • https://access.redhat.com/security/updates/classification/#moderate
  • https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/3.2_release_notes/
Note: More recent versions of these packages may be available.

Red Hat Enterprise Linux Server 7

SRPM
glusterfs-3.8.4-18.el7.src.rpm SHA-256: 27620c06c4719c201a7104fe1b5df274d9e78d171ef8e4f4a2ac171b80bbfe7e
x86_64
glusterfs-3.8.4-18.el7.x86_64.rpm SHA-256: 7096bf84a30022edaa89ff39928e8bb59ea262d1e1acb0dca09c4e7609ec71a2
glusterfs-api-3.8.4-18.el7.x86_64.rpm SHA-256: 74c29b2eb2de86c1317c8588f66780493d0a24528169f108bf4901725f2ddedd
glusterfs-api-devel-3.8.4-18.el7.x86_64.rpm SHA-256: 26e3ebc9910072950a27a594c34074b50f01dce8cd743a67ff0f93a5354c2ae4
glusterfs-cli-3.8.4-18.el7.x86_64.rpm SHA-256: ea9f0c4640a5fa04eb570e57cd43dde47e443c43eea44e5408f1757183eeaba4
glusterfs-client-xlators-3.8.4-18.el7.x86_64.rpm SHA-256: e000862cedaffdfe6c0ebb156ad8db63c3ad7a2e1190f06783d097c19b7503ef
glusterfs-debuginfo-3.8.4-18.el7.x86_64.rpm SHA-256: 407f91350bb661624f9cc9277400ebb3e5596657875f8f21ab8eba24475b551b
glusterfs-devel-3.8.4-18.el7.x86_64.rpm SHA-256: cb9401db49d1bf9448da8db013c630f5d5390e6c77dd8d65e9b4e45f342211d7
glusterfs-fuse-3.8.4-18.el7.x86_64.rpm SHA-256: 6d79c409dbe3fb47fca93c5f9f9c1ce23fcaf50f84c1abcc8f18a6231cd92988
glusterfs-libs-3.8.4-18.el7.x86_64.rpm SHA-256: 4192e8f9827e398633752741d32e8372ed753c0e9216660a101b18fd8e3da1de
glusterfs-rdma-3.8.4-18.el7.x86_64.rpm SHA-256: 3fea6e9d114fba6ad779389c8ef65fa5189af25523597fa304511e6491e96b64
python-gluster-3.8.4-18.el7.noarch.rpm SHA-256: 50e73082f878b05d63ebcd3df47733ad9e82f222623233b5cee9b96d4f86c0b2

Red Hat Virtualization 4 for RHEL 7

SRPM
glusterfs-3.8.4-18.el7.src.rpm SHA-256: 27620c06c4719c201a7104fe1b5df274d9e78d171ef8e4f4a2ac171b80bbfe7e
x86_64
glusterfs-3.8.4-18.el7.x86_64.rpm SHA-256: 7096bf84a30022edaa89ff39928e8bb59ea262d1e1acb0dca09c4e7609ec71a2
glusterfs-api-3.8.4-18.el7.x86_64.rpm SHA-256: 74c29b2eb2de86c1317c8588f66780493d0a24528169f108bf4901725f2ddedd
glusterfs-api-devel-3.8.4-18.el7.x86_64.rpm SHA-256: 26e3ebc9910072950a27a594c34074b50f01dce8cd743a67ff0f93a5354c2ae4
glusterfs-cli-3.8.4-18.el7.x86_64.rpm SHA-256: ea9f0c4640a5fa04eb570e57cd43dde47e443c43eea44e5408f1757183eeaba4
glusterfs-client-xlators-3.8.4-18.el7.x86_64.rpm SHA-256: e000862cedaffdfe6c0ebb156ad8db63c3ad7a2e1190f06783d097c19b7503ef
glusterfs-debuginfo-3.8.4-18.el7.x86_64.rpm SHA-256: 407f91350bb661624f9cc9277400ebb3e5596657875f8f21ab8eba24475b551b
glusterfs-devel-3.8.4-18.el7.x86_64.rpm SHA-256: cb9401db49d1bf9448da8db013c630f5d5390e6c77dd8d65e9b4e45f342211d7
glusterfs-fuse-3.8.4-18.el7.x86_64.rpm SHA-256: 6d79c409dbe3fb47fca93c5f9f9c1ce23fcaf50f84c1abcc8f18a6231cd92988
glusterfs-libs-3.8.4-18.el7.x86_64.rpm SHA-256: 4192e8f9827e398633752741d32e8372ed753c0e9216660a101b18fd8e3da1de
glusterfs-rdma-3.8.4-18.el7.x86_64.rpm SHA-256: 3fea6e9d114fba6ad779389c8ef65fa5189af25523597fa304511e6491e96b64
python-gluster-3.8.4-18.el7.noarch.rpm SHA-256: 50e73082f878b05d63ebcd3df47733ad9e82f222623233b5cee9b96d4f86c0b2

Red Hat Gluster Storage Server for On-premise 3 for RHEL 7

SRPM
glusterfs-3.8.4-18.el7rhgs.src.rpm SHA-256: a6e84f7094ca38a6c87736fbb27da67bdfdcccd9432fdb0722a41adbc756b249
redhat-storage-server-3.2.0.2-1.el7rhgs.src.rpm SHA-256: ea335da1cfe8fd83dfaeb2c4b28d9d383c331a96fa53510c76f6898a461d1cd8
vdsm-4.17.33-1.1.el7rhgs.src.rpm SHA-256: cb2a10e28fa4789cbb90fee8cc23b72654a6737407a890a0b7944997d15a4916
x86_64
glusterfs-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: ee9f376e62a3432623f767adeb68e03a74cabf4c11492dbeb7cac15a5aedac81
glusterfs-api-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: e9e0675ced3aeaa2ab69dd24c29a883cd5270081c44811b048538e6b0bd2f5f9
glusterfs-api-devel-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: b21bf4f1bca8b1d8a908b0482e031ec0757289c68dbabfae2b250461595d785d
glusterfs-cli-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: 05d336ca119c1aed5203605d0bf30af9672812dc53ead6ef6853d9b5230192fe
glusterfs-client-xlators-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: 8810467f68ed59c106966c5e91cc19494814632daae282c7ec97f3b3e1f8dd86
glusterfs-debuginfo-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: 48cf433dfca337650b67df94431a2a288388fcd4bd2e1f24707e0bc6d1d53473
glusterfs-devel-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: 6705eaa68505d2fe22528882351ea11535a49d9277a412472ce70d8bc274d537
glusterfs-events-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: 6c353153d099e616e8b1976a4eac0903211e791947071320789140479e13906d
glusterfs-fuse-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: 8f3b61fcd98d5f4d228df5875e54a077a79cf5f93f0082fc136907f3f78cb7fa
glusterfs-ganesha-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: ba355105e017418f0fde400a41a7ad7ab9168ebc391bf73510aa57f48a735b24
glusterfs-geo-replication-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: 44b53d1ef26d56cdc7786c96ab7d640507d53b64c00c906241ea667c47437332
glusterfs-libs-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: c01551ac85a94aea1f921a509f82e1bdb17cf7bfe370ec1987329620023203a5
glusterfs-rdma-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: 8cde8cc6e790aae32b3c63846f6576a324ad2608f4773652b6b6b1ea951f271f
glusterfs-server-3.8.4-18.el7rhgs.x86_64.rpm SHA-256: aa07d03c7edc3d017bcad2ff12bafeb51f5c32d0c0aa41527ca27552e20c6cbc
python-gluster-3.8.4-18.el7rhgs.noarch.rpm SHA-256: 06aa3b38f078c2cc8e6c9bf0a026e4c2a03416eb01ab5d50ce2a2cba8414a7ec
redhat-storage-server-3.2.0.2-1.el7rhgs.noarch.rpm SHA-256: e89720738da300fdc534c93962dcd1da7d49d0ee46091ca6aff6df3145617fc1
vdsm-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: 165e01b1ec0393d3721c2ed7aef5f65afd1241f78a1f99a4f8bae00f0365c0e4
vdsm-cli-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: 51809f26bbdb2d8d9e57b026d788d2f861df155c337a11abb4b6e85f59f4928d
vdsm-debug-plugin-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: 92388dde10ae740d4eea967be8220464430a171fc9120786cd25b8b2de9970c4
vdsm-gluster-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: 66f5762bd86c1d8916238325409c9f5a45f704a8a82d3f661c91a82a4d7e55b9
vdsm-hook-ethtool-options-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: 977158e808d5ecd7d795e5408966a4e7d6b911d3476fd2748a5e2388546f3ddd
vdsm-hook-faqemu-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: e5c583696793e97de6891ba8b01f60b27d6f9c51692637e44f6f8ba4471773e2
vdsm-hook-openstacknet-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: 6f4f4856b534a95ebcc6398062f4bc8b1c1b38ee9b123d02f6578554c3005c04
vdsm-hook-qemucmdline-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: aea5cf3d3d1b4b9e28673c3c733692a2647a35f9fbe6274625c6a659dd8159ea
vdsm-infra-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: c83b0f99848f65a82763d78c845f50d3230c067421b51f34acd970a1f4a40baf
vdsm-jsonrpc-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: c69a31353e9800bd7af56a8db15e47914eb768985fb2d0e246cb733e8fcc8db5
vdsm-python-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: 976ab6abe1e7f4b4a529e3d0e982a7346a67abbaca01522e20cb5ec3b55ef783
vdsm-tests-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: 5bde795f820ea38d2438e2144edf2c1c6fe78fcdb777eb976d250d6831e233cd
vdsm-xmlrpc-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: a02db526301e26cd4b7475c49b25074478cd27978aa79ce1afa7af3bab22f4db
vdsm-yajsonrpc-4.17.33-1.1.el7rhgs.noarch.rpm SHA-256: e64c5b569d864b7bcbd1a379f965e6c16bce243c2eeb2b886b38ce8a50d21f91
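Each package above is published with a SHA-256 checksum, so a download can be verified locally before installation. A minimal sketch using `sha256sum -c`; the placeholder file and its hash below are stand-ins for a real package such as glusterfs-3.8.4-18.el7.x86_64.rpm and the corresponding checksum from the tables above:

```shell
# Placeholder standing in for a downloaded RPM; in practice, use the real
# package file and copy its SHA-256 hash verbatim from this advisory.
file=example-package.rpm
printf 'placeholder rpm contents\n' > "$file"

# The published checksum (computed here from the placeholder for the sketch;
# normally this 64-character hex string comes from the package tables above).
expected=$(sha256sum "$file" | awk '{print $1}')

# Verify: sha256sum -c reads lines of the form "<hash>  <filename>" and
# prints "<filename>: OK" when the digest matches.
echo "$expected  $file" | sha256sum -c -
```

If the digest does not match, `sha256sum -c` reports FAILED and exits non-zero; discard the download and fetch the package again.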

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.

Copyright © 2021 Red Hat, Inc.