Red Hat Product Errata RHSA-2015:1845 - Security Advisory
Issued: 2015-10-05
Updated: 2015-10-05

Synopsis

Moderate: Red Hat Gluster Storage 3.1 update

Type/Severity

Security Advisory: Moderate

Topic

Red Hat Gluster Storage 3.1 Update 1, which fixes one security issue,
several bugs, and adds various enhancements, is now available for Red Hat
Enterprise Linux 6.

Red Hat Product Security has rated this update as having Moderate security
impact. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available from the CVE link in the
References section.

Description

Red Hat Gluster Storage is a software-only, scale-out storage solution that
provides flexible and affordable unstructured data storage. It unifies data
storage and infrastructure, increases performance, and improves
availability and manageability to meet enterprise-level storage challenges.

Red Hat Gluster Storage's Unified File and Object Storage is built on
OpenStack's Object Storage (swift).

A flaw was found in the metadata constraints in Red Hat Gluster Storage's
OpenStack Object Storage (swiftonfile). By adding metadata in several
separate calls, a malicious user could bypass the max_meta_count
constraint, and store more metadata than allowed by the configuration.
(CVE-2014-8177)
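
The mechanics of the flaw are a counting problem: if the object server validates only the metadata items carried by the current request, a client can stay under the limit on every call while the stored total grows without bound. The following is a minimal sketch, not the gluster-swift code; the names and the limit value are illustrative, and it only contrasts a per-request check with a cumulative one that also counts metadata already stored on the object.

    # Illustrative only -- not the gluster-swift implementation.
    MAX_META_COUNT = 90  # stands in for the max_meta_count constraint

    def check_per_request(new_metadata):
        # Flawed pattern: only the metadata in this request is counted, so
        # repeated requests can accumulate far more than MAX_META_COUNT items.
        if len(new_metadata) > MAX_META_COUNT:
            raise ValueError("too many metadata items")

    def check_cumulative(existing_metadata, new_metadata):
        # Safer pattern: count what is already stored plus what is being added.
        if len(set(existing_metadata) | set(new_metadata)) > MAX_META_COUNT:
            raise ValueError("too many metadata items")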

This update also fixes numerous bugs and adds various enhancements. Space
precludes documenting all of these changes in this advisory. Users are
directed to the Red Hat Gluster Storage 3.1 Technical Notes, linked to in
the References section, for information on the most significant of these
changes.

This advisory introduces the following new features:

  • Gdeploy is a tool that automates the process of creating, formatting,
    and mounting bricks. When setting up a fresh cluster, gdeploy can be the
    preferred way to build the cluster, as manually executing the numerous
    commands involved is error prone. The advantages of using gdeploy include
    automated brick creation, flexibility in choosing the drives to configure
    (sd, vd, etc.), and flexibility in naming the logical volumes (LV) and
    volume groups (VG). (BZ#1248899)

  • The gstatus command is now fully supported. It provides an easy-to-use,
    high-level view of the health of a trusted storage pool with a single
    command, gathering health information about a Red Hat Gluster Storage
    trusted storage pool for distributed, replicated, distributed-replicated,
    dispersed, and distributed-dispersed volumes. (BZ#1250453)

  • You can now recover a bad file detected by BitRot from a replicated
    volume. Information about the bad file is logged in the scrubber log
    file, located at /var/log/glusterfs/scrub.log; see the log-scanning
    sketch after this list. (BZ#1238171)

  • Two tailored tuned profiles are introduced to improve performance for
    specific Red Hat Gluster Storage workloads: rhgs-sequential-io, which
    improves performance for large-file, sequential I/O workloads, and
    rhgs-random-io, which improves performance for small-file, random I/O
    workloads (a profile-selection sketch follows this list). (BZ#1251360)
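
For the BitRot item above, information about detected bad files lands in the scrubber log, so a small script can surface those entries without tailing the log by hand. This is a minimal sketch under stated assumptions: the log path comes from this advisory, but the exact wording of scrubber messages is not specified here, so the match strings are placeholders to adjust for your installation.

    # Minimal sketch: list scrubber log lines that look like bad-file reports.
    # The log path is taken from the advisory; the MARKERS strings are
    # assumptions and may need adjusting to the exact messages on your nodes.
    SCRUB_LOG = "/var/log/glusterfs/scrub.log"
    MARKERS = ("corrupt", "Alert")

    def bad_file_entries(path=SCRUB_LOG, markers=MARKERS):
        with open(path, errors="replace") as log:
            for line in log:
                if any(marker in line for marker in markers):
                    yield line.rstrip()

    if __name__ == "__main__":
        for entry in bad_file_entries():
            print(entry)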
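
For the tuned profiles item, the advisory's mapping (large-file sequential I/O versus small-file random I/O) amounts to a simple workload-to-profile lookup. The sketch below assumes the standard tuned-adm command-line interface is available on the storage node; it is only an illustration, not part of the product tooling, and in practice the same switch is a single tuned-adm invocation.

    # Minimal sketch: apply one of the two RHGS tuned profiles named in this
    # advisory. Assumes tuned-adm is installed; run as root on the node.
    import subprocess

    PROFILES = {
        "sequential": "rhgs-sequential-io",  # large files, sequential I/O
        "random": "rhgs-random-io",          # small files, random I/O
    }

    def apply_profile(workload):
        profile = PROFILES[workload]
        subprocess.run(["tuned-adm", "profile", profile], check=True)

    if __name__ == "__main__":
        apply_profile("random")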

All users of Red Hat Gluster Storage are advised to apply this update.

Solution

Before applying this update, make sure all previously released errata
relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258

Affected Products

  • Red Hat Enterprise Linux Server 6 x86_64
  • Red Hat Gluster Storage Server for On-premise 3 for RHEL 6 x86_64
  • Red Hat Gluster Storage Nagios Server 3 for RHEL 6 x86_64

Fixes

  • BZ - 1027723 - Quota: volume-reset shouldn't remove quota-deem-statfs, unless explicitly specified, when quota is enabled.
  • BZ - 1064265 - quota: allowed to set soft-limit %age beyond 100%
  • BZ - 1076033 - Unknown Key: <bricks> are reported when the glusterd was restarted
  • BZ - 1091936 - In case an ACL is not set on a file, nfs4_getfacl should return a default ACL
  • BZ - 1134288 - "Unable to get transaction opinfo for transaction ID" error messages in glusterd logs
  • BZ - 1178100 - [USS]: gluster volume reset <vol-name>, resets the uss configured option but snapd process continues to run
  • BZ - 1213893 - rebalance stuck at 0 byte when auth.allow is set
  • BZ - 1215816 - 1 mkdir generates tons of log messages from dht and disperse xlators
  • BZ - 1225452 - [remove-brick]: Creation of file from NFS writes to the decommissioned subvolume and subsequent lookup from fuse creates a link
  • BZ - 1226665 - gf_store_save_value fails to check for errors, leading to emptying files in /var/lib/glusterd/
  • BZ - 1226817 - nfs-ganesha: new volume creation tries to bring up glusterfs-nfs even when nfs-ganesha is already on
  • BZ - 1227724 - Quota: Used space of the volume is wrongly calculated
  • BZ - 1227759 - Write performance from a Windows client on 3-way replicated volume decreases substantially when one brick in the replica set is brought down
  • BZ - 1228135 - [Bitrot] Gluster v set <volname> bitrot enable command succeeds, although it is not a supported way to enable bitrot
  • BZ - 1228158 - nfs-ganesha: error seen while delete node "Error: unable to create resource/fence device 'nfs5-cluster_ip-1', 'nfs5-cluster_ip-1' already exists on this system"
  • BZ - 1229606 - Quota: " E [quota.c:1197:quota_check_limit] 0-ecvol-quota: Failed to check quota size limit" in brick logs
  • BZ - 1229621 - Quota: Seeing error message in brick logs "E [posix-handle.c:157:posix_make_ancestryfromgfid] 0-vol0-posix: could not read the link from the gfid handle /rhs/brick1/b1/.glusterfs/a3/f3/a3f3664f-df98-448e-b5c8-924349851c7e (No such file or directory)"
  • BZ - 1231080 - Snapshot: When soft limit is reached and auto-delete is enabled, create snapshot doesn't log anything in log files
  • BZ - 1232216 - [geo-rep]: UnboundLocalError: local variable 'fd' referenced before assignment
  • BZ - 1232569 - [Backup]: Glusterfind list shows the session as corrupted on the peer node
  • BZ - 1234213 - [Backup]: Password of the peer nodes prompted whenever a glusterfind session is deleted.
  • BZ - 1234399 - `gluster volume heal <vol-name> split-brain' changes required for entry-split-brain
  • BZ - 1234610 - ACL created on a dht.linkto file on a files that skipped rebalance
  • BZ - 1234708 - Volume option cluster.enable-shared-storage is not listed in "volume set help-xml" output
  • BZ - 1235182 - quota: marker accounting miscalculated when renaming a file while a write is in progress
  • BZ - 1235571 - snapd crashed due to stack overflow
  • BZ - 1235971 - nfs-ganesha: ganesha-ha.sh --status is actually same as "pcs status"
  • BZ - 1236038 - Data Loss: Remove brick commit passing when remove-brick process has not even started (due to killing glusterd)
  • BZ - 1236546 - [geo-rep]: killing brick from replica pair makes geo-rep session faulty with Traceback "ChangelogException"
  • BZ - 1236672 - quota: brick crashes when create and remove performed in parallel
  • BZ - 1236990 - glfsheal crashed
  • BZ - 1238070 - snapd/quota/nfs runs on the RHGS node, even after that node was detached from trusted storage pool
  • BZ - 1238071 - Quota: Quota Daemon doesn't start after node reboot
  • BZ - 1238111 - Detaching a peer from the cluster doesn't remove snap related info and peer probe initiated from that node fails
  • BZ - 1238116 - Gluster-swift object server leaks fds in failure cases (when exceptions are raised)
  • BZ - 1238118 - nfs-ganesha: coredump for ganesha process post executing the volume start twice
  • BZ - 1238147 - Object expirer daemon times out and raises exception while attempting to expire a million objects
  • BZ - 1238171 - Not able to recover the corrupted file on Replica volume
  • BZ - 1238398 - Unable to examine file in metadata split-brain after setting `replica.split-brain-choice' attribute to a particular replica
  • BZ - 1238977 - Scrubber log should mark file corrupted message as Alert not as information
  • BZ - 1239021 - AFR: gluster v restart force or brick process restart doesn't heal the files
  • BZ - 1239075 - [geo-rep]: rename followed by deletes causes ESTALE
  • BZ - 1240614 - Gluster nfs started running on one of the nodes of ganesha cluster, even though ganesha was running on it
  • BZ - 1240657 - Deceiving log messages like "Failing STAT on gfid : split-brain observed. [Input/output error]" reported
  • BZ - 1241385 - [Backup]: Glusterfind pre attribute '--output-prefix' not working as expected in case of DELETEs
  • BZ - 1241761 - nfs-ganesha: coredump "pthread_spin_lock () from /lib64/libpthread.so.0"
  • BZ - 1241807 - Brick crashed after a complete node failure
  • BZ - 1241862 - EC volume: Replace bricks is not healing version of root directory
  • BZ - 1241871 - Symlink mount fails for nfs-ganesha volume
  • BZ - 1242803 - Quota list on a volume hangs after glusterd restart on a node.
  • BZ - 1243542 - [RHEV-RHGS] App VMs paused due to IO error caused by split-brain, after initiating remove-brick operation
  • BZ - 1243722 - glusterd crashed when a client which doesn't support SSL tries to mount a SSL enabled gluster volume
  • BZ - 1243886 - huge mem leak in posix xattrop
  • BZ - 1244415 - Enabling management SSL on a gluster cluster already configured can crash glusterd
  • BZ - 1244527 - DHT-rebalance: Rebalance hangs on distribute volume when glusterd is stopped on peer node
  • BZ - 1245162 - python-argparse not installed as a dependency package
  • BZ - 1245165 - Some times files are not getting signed
  • BZ - 1245536 - [RHGS-AMI] Same UUID generated across instances
  • BZ - 1245542 - quota/marker: errors in log file 'Failed to get metadata for'
  • BZ - 1245897 - gluster snapshot status --xml gives back unexpected non xml output
  • BZ - 1245915 - snap-view:mount crash if debug mode is enabled
  • BZ - 1245919 - USS: Take ref on root inode
  • BZ - 1245924 - [Snapshot] Scheduler should check vol-name exists or not before adding scheduled jobs
  • BZ - 1246946 - critical message seen in glusterd log file, when detaching a peer, but no functional loss
  • BZ - 1247445 - [upgrade] After in-service software upgrade from RHGS 2.1 to RHGS 3.1, self-heal daemon is not coming online
  • BZ - 1247537 - yum groups for RHGS Server and Console are listed under Available Language Groups instead of Available groups
  • BZ - 1248899 - [Feature 3.1.1 gdeploy] Develop tool to setup thinp backend and create Gluster volumes
  • BZ - 1249989 - [GSS] python-gluster packages not being treated as dependent package for gluster-swift packages
  • BZ - 1250453 - [Feature]: Qualify gstatus to 3.1.1 release
  • BZ - 1250821 - [RHGS 3.1 RHEL-7 AMI] RHEL-7 repo disabled by default, NFS and samba repos enabled by default
  • BZ - 1251360 - Update RHGS tuned profiles for RHEL-6
  • BZ - 1251925 - .trashcan is listed as container and breaks object expiration in gluster-swift
  • BZ - 1253141 - [RHGS-AMI] RHUI repos not accessible on RHGS-3.1 RHEL-7 AMI
  • BZ - 1254432 - gstatus: Overcommit field show wrong information when one of the node is down
  • BZ - 1254514 - gstatus: Status message doesn't show the storage node name which is down
  • BZ - 1254866 - gstatus: Running gstatus with -b option gives error
  • BZ - 1254991 - gdeploy: unmount doesn't remove fstab entries
  • BZ - 1255015 - gdeploy: unmount fails with fstype parameter
  • BZ - 1255308 - Inconsistent data returned when objects are modified from file interface
  • BZ - 1255471 - [libgfapi] crash when NFS Ganesha Volume is 100% full
  • BZ - 1257099 - gdeploy: checks missing for brick mounts when there are existing physical volumes
  • BZ - 1257162 - gdeploy: volume force option doesn't work as expected
  • BZ - 1257468 - gdeploy: creation of thin pool stuck after brick cleanup
  • BZ - 1257509 - Disperse volume: df -h on a nfs mount throws Invalid argument error
  • BZ - 1257525 - CVE-2014-8177 gluster-swift metadata constraints are not correctly enforced
  • BZ - 1258434 - gdeploy: peer probe issues during an add-brick operation with fresh hosts
  • BZ - 1258810 - gdeploy: change all references to brick_dir in config file
  • BZ - 1258821 - gdeploy: inconsistency in the way backend setup and volume creation uses brick_dirs value
  • BZ - 1259750 - DHT: Few files are missing after remove-brick operation
  • BZ - 1260086 - snapshot: from nfs-ganesha mount no content seen in .snaps/<snapshot-name> directory
  • BZ - 1260982 - gdeploy: ENOTEMPTY errors when gdeploy fails
  • BZ - 1262236 - glusterd: disable ping timer b/w glusterd and make epoll thread count default 1
  • BZ - 1262291 - `getfattr -n replica.split-brain-status <file>' command hung on the mount
  • BZ - 1263094 - nfs-ganesha crashes due to usage of invalid fd in glfs_close
  • BZ - 1263581 - nfs-ganesha: nfsd coredumps once quota limits cross while creating a file larger than the quota limit set
  • BZ - 1263653 - dht: Avoid double unlock in dht_refresh_layout_cbk

CVEs

  • CVE-2014-8177
  • CVE-2015-1856

References

  • https://access.redhat.com/security/updates/classification/#moderate
  • https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Technical_Notes/index.html
Note: More recent versions of these packages may be available.

Red Hat Enterprise Linux Server 6

SRPM
glusterfs-3.7.1-16.el6.src.rpm SHA-256: bf77a13c1a8e73a762a146c6b63a2337a5ab9627896e3f36dc533ebd187fc152
x86_64
glusterfs-3.7.1-16.el6.x86_64.rpm SHA-256: 7793f5cd6413ccf2befad03866fa004018b20e48bb66adede9fe19ecca1ce4e8
glusterfs-api-3.7.1-16.el6.x86_64.rpm SHA-256: ec5658608cd0962a42791cab2d1ea90960d256c3fa6acbbdcd64e9c726f2a89c
glusterfs-api-devel-3.7.1-16.el6.x86_64.rpm SHA-256: e332c63f820b1f785e385b139a762fd5676d1f2ab2f628963e84a5a3ebafe7a6
glusterfs-cli-3.7.1-16.el6.x86_64.rpm SHA-256: 56487987bf59d42a0bf49051b1b104e40fb1fbe5d446cb9c233b92aa789efa12
glusterfs-client-xlators-3.7.1-16.el6.x86_64.rpm SHA-256: 59b4132c40ebaae207b864a47619f051dd0fe10ec08ff4c8967dc088d07177a9
glusterfs-debuginfo-3.7.1-16.el6.x86_64.rpm SHA-256: 1e49822ba75308ba729b40dab66d50f38841524636f6b6005d193cd44233e92b
glusterfs-devel-3.7.1-16.el6.x86_64.rpm SHA-256: 42cbc9b383b43f4dabebe1c9593d39a1e2252fedf297399ae969206d2c749ffb
glusterfs-fuse-3.7.1-16.el6.x86_64.rpm SHA-256: bc41ed3ef35b90eebce5d7972498bb8188248ed55817bbf8c7322f064a75bb4f
glusterfs-libs-3.7.1-16.el6.x86_64.rpm SHA-256: cd0b440a8c3b3423d4e6b4c8b9599098a4b537d7f3278f471635f581613b35c1
glusterfs-rdma-3.7.1-16.el6.x86_64.rpm SHA-256: 7e7a4b6cf42e5c4d3a1aa5c9744cee25debab336657b9fd910f9bf76e6a1789e
python-gluster-3.7.1-16.el6.x86_64.rpm SHA-256: bd095fa0ce36f29f272cf891e0efe63abd03c62787e613579ab5f2bde0287c2d

Red Hat Gluster Storage Server for On-premise 3 for RHEL 6

SRPM
gdeploy-1.0-12.el6rhs.src.rpm SHA-256: 70d6f45b864f2ac3afe70258c1cb6243d2850bedd5fc55d757ea3b0172a85252
gluster-nagios-addons-0.2.5-1.el6rhs.src.rpm SHA-256: c8ecc64943a39ddb72c2b4d4aed0759a4dbbee28fb0c02629b8fd86d5d32d194
gluster-nagios-common-0.2.2-1.el6rhs.src.rpm SHA-256: f164c94ea205b3bd9674715b763d4b9bc22d377f42ec2d625177fc86b68983fd
glusterfs-3.7.1-16.el6rhs.src.rpm SHA-256: cba1e338903dd35cb84fdd9733c807df10cf9cb3b07f41dd05f61d2e2fc5ca5a
gstatus-0.65-1.el6rhs.src.rpm SHA-256: 6dba6aebbccbce0fff05d8a79dea6eed38e297aecde01919ca17c41ff854cde2
openstack-swift-1.13.1-6.el6ost.src.rpm SHA-256: f28376c28aa60efad065a283dcbde0d3ca424734c9632b31fe946fad4464035e
redhat-storage-server-3.1.1.0-2.el6rhs.src.rpm SHA-256: 30c919e64e8bbbd1d47dc0b4b78735b221b4112f54258eb2d1df7e61bfbb5430
swiftonfile-1.13.1-5.el6rhs.src.rpm SHA-256: 05a1ced9a09d8845154df13237ea5a0f68642f950e61b33bc6e54417ed1cbc8f
vdsm-4.16.20-1.3.el6rhs.src.rpm SHA-256: 3074fb13731db870a2c2b4396857f4203e57a7470c9e33263b834674e2cc7d91
x86_64
gdeploy-1.0-12.el6rhs.noarch.rpm SHA-256: df0d345d82b2393974a583ad179ae2b9c7b9f36f3609b126166ca411bdd0b789
gluster-nagios-addons-0.2.5-1.el6rhs.x86_64.rpm SHA-256: 224762dcc3a4e4520eda686d494bc22a7ff493740eb4acb2493a3269edaef1ea
gluster-nagios-addons-debuginfo-0.2.5-1.el6rhs.x86_64.rpm SHA-256: ac766085c6a39905108b8f62c9fa714db9b1f003419ca32eaf4aca7d39581fb2
gluster-nagios-common-0.2.2-1.el6rhs.noarch.rpm SHA-256: 9b3ddcc4994e50a6f07571ea948b9285b766b6e7ef86c4fc384fd74b7bf73429
glusterfs-3.7.1-16.el6rhs.x86_64.rpm SHA-256: 45a33812e8db59a96f5ab29dd112ced7b94caaa9223d3e2dc5ffab36abf0ba1e
glusterfs-api-3.7.1-16.el6rhs.x86_64.rpm SHA-256: 51bb9a0a42af4c7146a7cf04b9fd1a0971035366a4ca020a063720a91fbd0710
glusterfs-api-devel-3.7.1-16.el6rhs.x86_64.rpm SHA-256: 453a41bfdf91412138fac731499aecb7eda7f5e639e6160edaf9524ed43f54f2
glusterfs-cli-3.7.1-16.el6rhs.x86_64.rpm SHA-256: 82f3cad985be72773d8deeb2e9fb0038bc40a7a9f2308cd16e7bb450841bc195
glusterfs-client-xlators-3.7.1-16.el6rhs.x86_64.rpm SHA-256: 0fca46a78371a4d5313c65d378583eb5a2a733210b19d9a057aed4096fe64dcc
glusterfs-debuginfo-3.7.1-16.el6rhs.x86_64.rpm SHA-256: f79cd0da67727a39ec9969a9d5f6f90220a65435a78fada018128dd13eb3f692
glusterfs-devel-3.7.1-16.el6rhs.x86_64.rpm SHA-256: a8d20a9be53cbb9cf773b024b0b2380c0d3b9cf7cf05e6444172fca7937e905e
glusterfs-fuse-3.7.1-16.el6rhs.x86_64.rpm SHA-256: a219ce309bdffea170e22e7cab463144d856d2be0b162acf9e45f9ee310a4c8d
glusterfs-ganesha-3.7.1-16.el6rhs.x86_64.rpm SHA-256: 9a93ddb3e57f69b5d9dab41370e19ace73949d99d57c2486a9db0f29381795eb
glusterfs-geo-replication-3.7.1-16.el6rhs.x86_64.rpm SHA-256: b378d553c5c5f15dd77b1b61c5470b2e2e7fa192c89320cc46a01920d5426872
glusterfs-libs-3.7.1-16.el6rhs.x86_64.rpm SHA-256: 6dd19c0a93e73a489652a5afa3e0b3d0f3789a96ee91721feb0bccbdd267cd4b
glusterfs-rdma-3.7.1-16.el6rhs.x86_64.rpm SHA-256: cb5431c19cafa5525893b1cf4eda64cbf44099f231cef4fa27776a6ede8e3064
glusterfs-server-3.7.1-16.el6rhs.x86_64.rpm SHA-256: e2a0384556e121af4a6a90afc4fcf4d6e8ee86809084b70531debd172154482f
gstatus-0.65-1.el6rhs.x86_64.rpm SHA-256: 764f7ddeb4cbf14d9e498b71d7a1fb1ea0ed0a13068599df414e44455e4b9713
gstatus-debuginfo-0.65-1.el6rhs.x86_64.rpm SHA-256: 35426df15ed6ff5068bbafb5e460cae30686c4b928ae5bb1cc4553b120c1c6ae
openstack-swift-1.13.1-6.el6ost.noarch.rpm SHA-256: 1cc06e8859696c8fdcc9a2ed5f4468eb70bf093ed75fa466688fddc08ea22e70
openstack-swift-account-1.13.1-6.el6ost.noarch.rpm SHA-256: 4cae4da4a8b962dcf1602c34718b890e62363f11941c4c3c08339a02004b7ffe
openstack-swift-container-1.13.1-6.el6ost.noarch.rpm SHA-256: 12601d81bff7dc6c35ece23a5246f1c0f857d650476a24f50bd83690c26c1c5f
openstack-swift-doc-1.13.1-6.el6ost.noarch.rpm SHA-256: b712c4018726ec2b9a240d60ad1a7e10a6de7f741bd2104119db8e3f94c8fd54
openstack-swift-object-1.13.1-6.el6ost.noarch.rpm SHA-256: 5ce19b55b4c67407ffb9488efefc940fb249966c8f82845488a614e6d595f638
openstack-swift-proxy-1.13.1-6.el6ost.noarch.rpm SHA-256: ea34dbe4d03cfa810a4032328b2eea7cccd8c74b70bb17565a82cdc5cf54f5e6
python-gluster-3.7.1-16.el6rhs.x86_64.rpm SHA-256: f009a4bb8d7f3d8209c338a84892087c4b95b8d5b1be329079a7dc27fc6ddb4f
redhat-storage-server-3.1.1.0-2.el6rhs.noarch.rpm SHA-256: dd54b39624e13b9d269161eaae122f42b49e1308cbbd9b07cac3d4bc52f80a4e
swiftonfile-1.13.1-5.el6rhs.noarch.rpm SHA-256: 7521abd12da5e821805ede188d1900fee76a0992dd7d3e365b63c573ef471573
vdsm-4.16.20-1.3.el6rhs.x86_64.rpm SHA-256: 438cc533c68fd7872deb8cf0898474594b505a5ecb42564cd4b801267a487986
vdsm-cli-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: 9d83687709d81fd37e1fb96ec5c64f5349e20fd488769766c0f4cd58fe76f83f
vdsm-debug-plugin-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: 40280473642b28d4cd35f6376c728a95027b97d56222f46e6e0f20558286dcfa
vdsm-debuginfo-4.16.20-1.3.el6rhs.x86_64.rpm SHA-256: 6887a7542b58f2eecfcebd44de07f08fd3fec05963d2f7a46a74d75e9dec8de0
vdsm-gluster-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: 6d6aa4510f59d12aeed31e3b3eaded5fe429e4d8cdaf61f187376a4852804ae9
vdsm-hook-ethtool-options-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: c5b73e3a823739f2357b038dd82f178bc17ebeb3002cf0bc6b1c9f655e784736
vdsm-hook-faqemu-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: c6e6428055a46f673d82cc90910650141d8221dda6b130f559191fb49cc84880
vdsm-hook-openstacknet-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: 2d3294b92c61c2ea0abaac88d97a31ec734d7abc1be2fa9b5c64b571f34c342e
vdsm-hook-qemucmdline-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: 153a59f16c2a49bea03d347a02d2c1d26f1208a7d8d044ffcbedb66e994edfe8
vdsm-jsonrpc-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: 8d9ac445eec2cae4d9dc45c073a5428ba10f7bf13b58e65be498e59adc706caa
vdsm-python-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: f9af63a4aba0c86987a5d9c86112c75a6da77c87845db52bb7f132f676c22720
vdsm-python-zombiereaper-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: a94d8c0f5465391616accc417660f743b18d671c39c4bf0956ca4888716fe64c
vdsm-reg-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: ec3aa6d9bc1aab9c6cef0fe708552a6c8101867fbda960aae76077e687ab8587
vdsm-tests-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: 8018d060ad5f5c2d742f474f6bdf1e3680dcf345fb31d2ba142c7d3836e9662e
vdsm-xmlrpc-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: 1510a8770c471c82f27f5cb42b1310a4fdbff14cfc4a5e06d9fbb83107c32e76
vdsm-yajsonrpc-4.16.20-1.3.el6rhs.noarch.rpm SHA-256: 8283408036348138aea9d256862a73b4190b65fe2ad4787d68de928780fb201b

Red Hat Gluster Storage Nagios Server 3 for RHEL 6

SRPM
gluster-nagios-common-0.2.2-1.el6rhs.src.rpm SHA-256: f164c94ea205b3bd9674715b763d4b9bc22d377f42ec2d625177fc86b68983fd
nagios-server-addons-0.2.2-1.el6rhs.src.rpm SHA-256: 10cf6da87a84b71a8a12a99f506e2ac5efdda886f43872c0df624aae431ba6fe
x86_64
gluster-nagios-common-0.2.2-1.el6rhs.noarch.rpm SHA-256: 9b3ddcc4994e50a6f07571ea948b9285b766b6e7ef86c4fc384fd74b7bf73429
nagios-server-addons-0.2.2-1.el6rhs.noarch.rpm SHA-256: d400725eab348e19ee7922730da13717418f458614b06170ed7e963bef854a35

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.
