- Issued: 2016-06-23
- Updated: 2016-06-23
RHBA-2016:1240 - Bug Fix Advisory
Synopsis
Red Hat Gluster Storage 3.1 Update 3
Type/Severity
Bug Fix Advisory
Topic
Red Hat Gluster Storage 3.1 Update 3, which fixes several bugs and adds
various enhancements, is now available for Red Hat Enterprise Linux 6.
Description
Red Hat Gluster Storage is a software-only, scale-out storage solution that
provides flexible and affordable unstructured data storage. It unifies data
storage and infrastructure, increases performance, and improves
availability and manageability to meet enterprise-level storage challenges.
This update fixes numerous bugs and adds various enhancements. Space
precludes documenting all of these changes in this advisory. Users are
directed to the Red Hat Gluster Storage 3.1 Technical Notes, linked to in
the References section, for information on the most significant of these
changes.
All users of Red Hat Gluster Storage are advised to upgrade to these
updated packages.
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to:
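The referenced instructions are not reproduced here. As a general illustration only, assuming a node subscribed to the appropriate Red Hat Gluster Storage 3.1 repositories, the update can typically be checked for and applied with yum and then verified with rpm:

    # yum check-update 'glusterfs*'
      (lists glusterfs packages that have pending updates)
    # yum update
      (applies all available errata packages, including those in this advisory)
    # rpm -q glusterfs-server
      (on a storage server this should now report glusterfs-server-3.7.9-10.el6rhs)

Nodes with running bricks or active geo-replication sessions may require the in-service upgrade procedure described in the Red Hat Gluster Storage documentation; this sketch does not cover that.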
Affected Products
- Red Hat Enterprise Linux Server 6 x86_64
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 6 x86_64
- Red Hat Gluster Storage Nagios Server 3 for RHEL 6 x86_64
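To check which of these products a given system maps to, one illustrative (not authoritative) approach is to query the release file and the product packages named in this advisory; subscription tooling remains the definitive source:

    # cat /etc/redhat-release
      (confirms the Red Hat Enterprise Linux 6 base)
    # rpm -q redhat-storage-server glusterfs-server
      (installed on Red Hat Gluster Storage Server for On-premise nodes)
    # rpm -q nagios-server-addons
      (installed on Red Hat Gluster Storage Nagios Server nodes)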
Fixes
- BZ - 1101702 - setting lower op-version should throw failure message
- BZ - 1113954 - glusterd logs are filled with "readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)"
- BZ - 1114045 - DHT :- log is full of ' Found anomalies in /<DIR> (gfid = 00000000-0000-0000-0000-000000000000)' - for each Directory which was self healed
- BZ - 1115367 - "rm -rf *" from multiple mount points fails to remove directories on all the subvolumes
- BZ - 1118762 - DHT : few Files are not accessible and not listed on mount + more than one Directory have same gfid + (sometimes) attributes has ?? in ls output after renaming Directories from multiple client at same time
- BZ - 1121186 - DHT : If Directory deletion is in progress and a lookup from another mount heals that Directory on sub-volumes, then rmdir/rm -rf on parents fails with error 'Directory not empty'
- BZ - 1159263 - [USS]: Newly created directories don't have the .snaps folder
- BZ - 1162648 - [USS]: There should be limit to the size of "snapshot-directory"
- BZ - 1231150 - After resetting diagnostics.client-log-level, still Debug messages are logging in scrubber log
- BZ - 1233213 - [New] - volume info --xml gives host UUID as zeros
- BZ - 1255639 - [libgfapi]: do an explicit lookup on the inodes linked in readdirp
- BZ - 1258875 - DHT: Once remove brick start failed in between Remove brick commit should not be allowed
- BZ - 1261838 - [geo-rep]: Multiple geo-rep session to the same slave is allowed for different users
- BZ - 1273539 - Remove dependency of glusterfs on rsyslog
- BZ - 1276219 - [GlusterD]: After log rotate of cmd_history.log file, the next executed gluster commands are not present in the cmd_history.log file.
- BZ - 1277414 - [Snapshot]: Snapshot restore gets stuck in post validation.
- BZ - 1277828 - RFE:nfs-ganesha:prompt the nfs-ganesha disable cli to let user provide "yes or no" option
- BZ - 1278332 - nfs-ganesha server do not enter grace period during failover/failback
- BZ - 1279628 - [GSS]-gluster v heal volname info does not work with enabled ssl/tls
- BZ - 1282747 - While file is self healing append to the file hangs
- BZ - 1283957 - Data Tiering:tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of volume
- BZ - 1285196 - Dist-geo-rep : checkpoint doesn't reach even though all the files have been synced through hybrid crawl.
- BZ - 1285200 - Dist-geo-rep : geo-rep worker crashed while init with [Errno 34] Numerical result out of range.
- BZ - 1285203 - dist-geo-rep: status details incorrectly indicate few files as skipped when all the files are properly synced to slave
- BZ - 1286191 - dist-rep + quota : directory selfheal is not healing xattr 'trusted.glusterfs.quota.limit-set'; If you bring a replica pair down
- BZ - 1287951 - [GlusterD]Probing a node having standalone volume, should not happen
- BZ - 1289439 - snapd doesn't come up automatically after node reboot.
- BZ - 1290653 - [GlusterD]: GlusterD log is filled with error messages - " Failed to aggregate response from node/brick"
- BZ - 1291988 - [geo-rep]: ChangelogException: [Errno 22] Invalid argument observed upon rebooting the ACTIVE master node
- BZ - 1292034 - nfs-ganesha installation : no pacemaker package installed for RHEL 6.7
- BZ - 1293273 - [GlusterD]: Peer detach happening with a node which is hosting volume bricks
- BZ - 1294062 - [georep+disperse]: Geo-Rep session went to faulty with errors "[Errno 5] Input/output error"
- BZ - 1294612 - Self heal command gives error "Launching heal operation to perform index self heal on volume vol0 has been unsuccessful"
- BZ - 1294642 - quota: handle quota xattr removal when quota is enabled again
- BZ - 1294751 - Able to create files when quota limit is set to 0
- BZ - 1294790 - promotions and demotions not happening after attach tier due to fix layout taking very long time(3 days)
- BZ - 1296176 - geo-rep: hard-link rename issue on changelog replay
- BZ - 1298068 - GlusterD restart, starting the bricks when server quorum not met
- BZ - 1298162 - fuse mount crashed with mount point inaccessible and core found
- BZ - 1298955 - [GSS] - Setting of any option using volume set fails when the clients are 3.0.4 and server is 3.1.1
- BZ - 1299432 - Glusterd: Creation of volume is failing if one of the brick is down on the server
- BZ - 1299737 - values for Number of Scrubbed files, Number of Unsigned files, Last completed scrub time and Duration of last scrub are shown as zeros in bit rot scrub status
- BZ - 1300231 - 'gluster volume get' returns 0 value for server-quorum-ratio
- BZ - 1300679 - promotions not balanced across hot tier sub-volumes
- BZ - 1302355 - Over some time Files which were accessible become inaccessible(music files)
- BZ - 1302553 - heal info reporting slow when IO is in progress on the volume
- BZ - 1302688 - [HC] Implement fallocate, discard and zerofill with sharding
- BZ - 1303125 - After GlusterD restart, Remove-brick commit happening even though data migration not completed.
- BZ - 1303591 - AFR+SNAPSHOT: File with hard link have different inode number in USS
- BZ - 1303593 - [USS]: If .snaps already exists, ls -la lists it even after enabling USS
- BZ - 1304282 - [USS]: Need defined rules for snapshot-directory, setting to a/b works but in linux a/b is b is subdirectory of a
- BZ - 1304585 - quota: disabling and enabling quota in a quick interval removes quota's limit usage settings on multiple directories
- BZ - 1305456 - Errors seen in cli.log, while executing the command 'gluster snapshot info --xml'
- BZ - 1305735 - Improve error message for unsupported clients
- BZ - 1305836 - DHT: Take blocking locks while renaming files
- BZ - 1305849 - cd to .snaps fails with "transport endpoint not connected" after force start of the volume.
- BZ - 1306194 - NFS+attach tier:IOs hang while attach tier is issued
- BZ - 1306218 - quota: xattr trusted.glusterfs.quota.limit-objects not healed on a root of newly added brick
- BZ - 1306667 - Newly created volume start, starting the bricks when server quorum not met
- BZ - 1306907 - [New] - quarantine folder becomes empty and bitrot status does not list any files which are corrupted
- BZ - 1308837 - Peers goes to rejected state after reboot of one node when quota is enabled on cloned volume.
- BZ - 1311362 - [AFR]: "volume heal info" command is failing during in-service upgrade to latest.
- BZ - 1311839 - False positives in heal info
- BZ - 1313290 - [HC] glusterfs mount crashed
- BZ - 1313320 - features.sharding is not available in 'gluster volume set help'
- BZ - 1313352 - Dist-geo-rep: Support geo-replication to work with sharding
- BZ - 1313370 - No xml output on gluster volume heal info command with --xml
- BZ - 1314373 - Peer information is not propagated to all the nodes in the cluster, when the peer is probed with its second interface FQDN/IP
- BZ - 1314391 - glusterd crashed when probing a node with firewall enabled on only one node
- BZ - 1314421 - [HC] Ensure o-direct behaviour when sharding is enabled on volume and files opened with o_direct
- BZ - 1314724 - Multi-threaded SHD support
- BZ - 1315201 - [GSS] - smbd crashes on 3.1.1 with samba-vfs 4.1
- BZ - 1317790 - Cache swift xattrs
- BZ - 1317940 - smbd crashes while accessing multiple volume shares via same client
- BZ - 1318170 - marker: set inode ctx before lookup is unwind
- BZ - 1318427 - gfid-reset of a directory in distributed replicate volume doesn't set gfid on 2nd till last subvolumes
- BZ - 1318428 - ./tests/basic/tier/tier-file-create.t dumping core fairly often on build machines in Linux
- BZ - 1319406 - gluster volume heal info shows conservative merge entries as in split-brain
- BZ - 1319592 - DHT-rebalance: rebalance status shows failed when replica pair bricks are brought down in distrep volume while re-name of files going on
- BZ - 1319619 - RHGS-3.1 op-version need to be corrected
- BZ - 1319634 - Data Tiering:File create terminates with "Input/output error" as split brain is observed
- BZ - 1319638 - rpc: set bind-insecure to off by default
- BZ - 1319658 - setting enable-shared-storage without mentioning the domain doesn't enable shared storage
- BZ - 1319670 - regression : RHGS 3.0 introduced a maximum value length in the info files
- BZ - 1319688 - Probing a new RHGS node, which is part of another cluster, should throw proper error message in logs and CLI
- BZ - 1319695 - Disabling enable-shared-storage deletes the volume with the name - "gluster_shared_storage"
- BZ - 1319698 - Creation of files on hot tier volume taking very long time
- BZ - 1319710 - glusterd: disable ping timer b/w glusterd and make epoll thread count default 1
- BZ - 1319996 - glusterfs-devel: 3.7.0-3.el6 client package fails to install on dependency
- BZ - 1319998 - while performing in-service software upgrade, gluster-client-xlators, glusterfs-ganesha, python-gluster package should not get installed when distributed volume up
- BZ - 1320000 - While performing in-service software update, glusterfs-geo-replication and glusterfs-cli packages are updated even when glusterfsd or distributed volume is up
- BZ - 1320390 - build: spec file conflict resolution
- BZ - 1320412 - disperse: Provide an option to enable/disable eager lock
- BZ - 1321509 - Critical error message seen in glusterd log file, after logrotate
- BZ - 1321550 - Do not succeed mkdir without gfid-req
- BZ - 1321556 - Continuous nfs_grace_monitor log messages observed in /var/log/messages
- BZ - 1322247 - SAMBA+TIER : File size is not getting updated when created on windows samba share mount
- BZ - 1322306 - [scale] Brick process does not start after node reboot
- BZ - 1322695 - TIER : Wrong message displayed. On detach tier success, the message reflects 'Tier command failed'.
- BZ - 1322765 - glusterd: glusterd didn't come up after node reboot with error "realpath () failed for brick /run/gluster/snaps/130949baac8843cda443cf8a6441157f/brick3/b3. The underlying file system may be in bad state [No such file or directory]"
- BZ - 1323042 - Inconsistent directory structure on dht subvols caused by parent layouts going stale during entry create operations because of fix-layout
- BZ - 1323119 - TIER : Attach tier fails
- BZ - 1323424 - Ganesha: Continuous "0-glfs_h_poll_cache_invalidation: invalid argument" messages getting logged in ganesha-gfapi logs.
- BZ - 1324338 - Too many log messages showing inode ctx is NULL for 00000000-0000-0000-0000-000000000000
- BZ - 1324604 - [Perf] : 14-53% regression in metadata performance with RHGS 3.1.3 on FUSE mounts
- BZ - 1324820 - /var/lib/glusterd/$few-directories not owned by any package, causing it to remain after glusterfs-server is uninstalled
- BZ - 1325750 - Volume stop is failing when one of brick is down due to underlying filesystem crash
- BZ - 1325760 - Worker dies with [Errno 5] Input/output error upon creation of entries at slave
- BZ - 1325975 - nfs-ganesha crashes with segfault error while doing refresh config on volume.
- BZ - 1326248 - [tiering]: during detach tier operation, Input/output error is seen with new file writes on NFS mount
- BZ - 1326498 - DHT: Provide mechanism to nuke a entire directory from a client (offloading the work to the bricks)
- BZ - 1326505 - fuse: fix inode and dentry leaks
- BZ - 1326663 - [DHT-Rebalance]: with few brick process down, rebalance process isn't killed even after stopping rebalance process
- BZ - 1327035 - fuse: Avoid redundant lookup on "." and ".." as part of every readdirp
- BZ - 1327036 - Use after free bug in notify_kernel_loop in fuse-bridge code
- BZ - 1327165 - snapshot-clone: clone volume doesn't start after node reboot
- BZ - 1327552 - [geo-rep]: geo status shows $MASTER Nodes always with hostname even if volume is configured with IP
- BZ - 1327751 - glusterd memory overcommit
- BZ - 1328194 - upgrading from RHGS 3.1.2 el7 client package to 3.1.3 throws warning
- BZ - 1328397 - [geo-rep]: schedule_georep.py doesn't touch the mount in every iteration
- BZ - 1328411 - SMB: running I/O on a cifs mount while doing a graph switch causes the cifs mount to hang.
- BZ - 1328721 - [Tiering]: promotion of files may not be balanced on distributed hot tier when promoting files with size as that of max.mb
- BZ - 1329118 - volume create fails with "Failed to store the Volume information" due to /var/lib/glusterd/vols missing with latest build
- BZ - 1329514 - rm -rf to a dir gives directory not empty(ENOTEMPTY) error
- BZ - 1329895 - eager-lock should be used as cluster.eager-lock in /var/lib/glusterd/group/virt file as there is a new option disperse.eager-lock
- BZ - 1330044 - one of vm goes to paused state when network goes down and comes up back
- BZ - 1330385 - glusterd restart is failing if volume brick is down due to underlying FS crash.
- BZ - 1330511 - build: redhat-storage-server for RHGS 3.1.3 - [RHEL 6.8]
- BZ - 1330881 - Inode leaks found in data-self-heal
- BZ - 1330901 - dht must avoid fresh lookups when a single replica pair goes offline
- BZ - 1331260 - Swift: The GET on object manifest with certain byte range fails to show the content of file.
- BZ - 1331280 - Some of VMs go to paused state when there is concurrent I/O on vms
- BZ - 1331376 - [geo-rep]: schedule_georep.py doesn't work when invoked using cron
- BZ - 1332077 - We need more debug info from stack wind and unwind calls
- BZ - 1332199 - Self Heal fails on a replica3 volume with 'disk quota exceeded'
- BZ - 1332269 - /var/lib/glusterd/groups/groups file doesn't get updated when the file is edited or modified
- BZ - 1332949 - Heal info shows split-brain for .shard directory though only one brick was down
- BZ - 1332957 - [Tiering]: detach tier fails due to the error - 'removing tier fix layout xattr from /'
- BZ - 1333643 - Files present in the .shard folder even after deleting all the vms from the UI
- BZ - 1333668 - SAMBA-VSS : Permission denied issue while restoring the directory from windows client 1 when files are deleted from windows client 2
- BZ - 1334092 - [NFS-Ganesha] : stonith-enabled option not set with new versions of cman,pacemaker,corosync and pcs
- BZ - 1334234 - [Tiering]: Files remain in hot tier even after detach tier completes
- BZ - 1334668 - getting dependency error while upgrading RHGS client to build glusterfs-3.7.9-4.el7.x86_64.
- BZ - 1334985 - Under high read load, sometimes the message "XDR decoding failed" appears in the logs and read fails
- BZ - 1335082 - [Tiering]: Detach tier commit is allowed before rebalance is complete
- BZ - 1335114 - refresh-config failing with latest 2.3.1-6 nfs-ganesha build.
- BZ - 1335357 - Modified volume options are not syncing once glusterd comes up.
- BZ - 1335359 - Adding of identical brick (with diff IP/hostname) from peer node is failing.
- BZ - 1335364 - Fix excessive logging due to NULL dict in dht
- BZ - 1335367 - Failing to remove/replace the bad brick part of the volume.
- BZ - 1335437 - Self heal shows different information for the same volume from each node
- BZ - 1335505 - Brick logs spammed with dict_get errors
- BZ - 1335826 - failover is not working with latest builds.
- BZ - 1336295 - Replace brick causes vm to pause and /.shard is always present in the heal info
- BZ - 1336332 - glusterfs processes doesn't stop after invoking stop-all-gluster-processes.sh
- BZ - 1337384 - Brick processes not getting ports once glusterd comes up.
- BZ - 1337649 - log flooded with Could not map name=xxxx to a UUID when config'd with long hostnames
- BZ - 1339090 - During failback, nodes other than failed back node do not enter grace period
- BZ - 1339136 - Some of the VMs pause with read-only file system error even when volume-status reports all bricks are up
- BZ - 1339163 - [geo-rep]: Monitor crashed with [Errno 3] No such process
- BZ - 1339208 - Ganesha gets killed with segfault error while rebalance is in progress.
- BZ - 1340085 - Directory creation(mkdir) fails when the remove brick is initiated for replicated volumes accessing via nfs-ganesha
- BZ - 1340383 - [geo-rep]: If the session is renamed, geo-rep configuration are not retained
- BZ - 1341034 - [quota+snapshot]: Directories are inaccessible from activated snapshot, when the snapshot was created during directory creation
- BZ - 1341316 - [geo-rep]: Snapshot creation having geo-rep session is broken with latest build
- BZ - 1341567 - After setting up ganesha on RHEL 6, nodes remains in stopped state and grace related failures observed in pcs status
- BZ - 1341820 - [geo-rep]: Upgrade from 3.1.2 to 3.1.3 breaks the existing geo-rep session
- BZ - 1342252 - [geo-rep]: Remove brick with geo-rep session fails with latest build
- BZ - 1342261 - [georep]: Stopping volume fails if it has geo-rep session (Even in stopped state)
- BZ - 1342426 - self heal deamon killed due to oom kills on a dist-disperse volume using nfs ganesha
- BZ - 1342938 - [geo-rep]: Add-Brick use case: create push-pem force on existing geo-rep fails
- BZ - 1343549 - libglusterfs: Negate all but O_DIRECT flag if present on anon fds
- BZ - 1344278 - [disperse] mkdir after re balance give Input/Output Error
- BZ - 1344625 - fail delete volume operation if one of the glusterd instance is down in cluster
- BZ - 1347217 - Incorrect product version observed for RHEL 6 and 7 in product certificates
CVEs
(none)
Red Hat Enterprise Linux Server 6
Package | SHA-256 |
---|---|
SRPM | |
glusterfs-3.7.9-10.el6.src.rpm | SHA-256: 9928854a6dd3e5b59a0a80a3ec0bd354f7002fb97fa0e2f51bc5558e0269c65b |
x86_64 | |
glusterfs-3.7.9-10.el6.x86_64.rpm | SHA-256: 8ce97cc691af0b1bfb947c2db31a7832520db00ff5f5a4ebc5d2ba97926e946d |
glusterfs-api-3.7.9-10.el6.x86_64.rpm | SHA-256: a378dcfb4c8c4f2b02ea9be6e0f7a9cbe755de9cf0123b4c73770b6ccbaa0a9e |
glusterfs-api-devel-3.7.9-10.el6.x86_64.rpm | SHA-256: 0df6c68b1288917c27e7207782830004344fde09ccfb2dd5daf96f1a125dfda8 |
glusterfs-cli-3.7.9-10.el6.x86_64.rpm | SHA-256: 58af33622428911eae561444ed31c76c8ecd8b9acd2d1dca803b4deb1f93ca5b |
glusterfs-client-xlators-3.7.9-10.el6.x86_64.rpm | SHA-256: ae3d813c13a97dfa42d95b31afc771e07a62af8b66dcf61930502c81b257a3ca |
glusterfs-debuginfo-3.7.9-10.el6.x86_64.rpm | SHA-256: a511f31fa3e710b9677e7dce76839eb4f1e3329041cda485835eb08cc7aa3077 |
glusterfs-devel-3.7.9-10.el6.x86_64.rpm | SHA-256: 46396beea811b55a434c4ff859854b40ea935d56f762754f6694da6c28f41e07 |
glusterfs-fuse-3.7.9-10.el6.x86_64.rpm | SHA-256: c2f9e176d6b5dcf97f368e83b7eec846636cc16d3886ef73007b0ef00faf3476 |
glusterfs-libs-3.7.9-10.el6.x86_64.rpm | SHA-256: 5076abf1c65621b9dae3e0cd2aeb04de1f945e2661bcf58e9e771831b12a8a97 |
glusterfs-rdma-3.7.9-10.el6.x86_64.rpm | SHA-256: 8004756d1a797d92f614159a81b8974250ad26cc8f3555dd0889f5f527c85728 |
python-gluster-3.7.9-10.el6.noarch.rpm | SHA-256: 977a2b0a1b7a365c2fe86db0f3ba89c0bf7e2f499e6ad36c7057628777e8e09c |
Red Hat Gluster Storage Server for On-premise 3 for RHEL 6
Package | SHA-256 |
---|---|
SRPM | |
gluster-nagios-addons-0.2.7-1.el6rhs.src.rpm | SHA-256: 8fc121540b9009e5b16b3d3f18bab99f76b4c225cc1443e5dd4d950377dcd092 |
gluster-nagios-common-0.2.4-1.el6rhs.src.rpm | SHA-256: 6bce0e4357b1e369ca0ec91f7cde4fd24c25e4b865ec2d1dbe4941c395436d1a |
glusterfs-3.7.9-10.el6rhs.src.rpm | SHA-256: 7eeafa1c4512d105120d971134455bbdd1a7fa3166db1778a1337e2cdd3a161d |
redhat-storage-server-3.1.3.0-3.el6rhs.src.rpm | SHA-256: 7937778f08851cafe73a479fc3bdd17a256556ba6328117131a8a9f6efb998d4 |
sanlock-2.8-3.el6.src.rpm | SHA-256: 75411b8a7567b52237ba02e8b2a98579b0d901e5eb111bcd214205ce18073fb7 |
vdsm-4.16.30-1.5.el6rhs.src.rpm | SHA-256: 8b5a81db824eaaf431a6d19b65281064f6d81418ef2b59dbd2b1515b862dd550 |
x86_64 | |
fence-sanlock-2.8-3.el6.x86_64.rpm | SHA-256: 41780d1f38de4676d43ee3fd5b9aea7186206c78b199f42e0d64e3f8c0d40a70 |
gluster-nagios-addons-0.2.7-1.el6rhs.x86_64.rpm | SHA-256: 71b2fcb36fde8d3a105347b0950934cc7d1375565c8e86320fa77aa35a5656e2 |
gluster-nagios-addons-debuginfo-0.2.7-1.el6rhs.x86_64.rpm | SHA-256: 8f207050342e0df14804653447c66be498c7bbc63cd320697f0d5893dc90f475 |
gluster-nagios-common-0.2.4-1.el6rhs.noarch.rpm | SHA-256: b09188c6a874d584f5fad17545d1479c30cdb36509ed07a2db3cc9f50f069cbb |
glusterfs-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 305b0c0ddfad2654425320dab09e5d5ee9307829e77ee22f0cf2a95c6c6b6596 |
glusterfs-api-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 461df35b98af428868d4e2b937ca0f01cc3e7156fbb8a2ca3c22b2ed891f2e8a |
glusterfs-api-devel-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 5e4e63f353e040d2e7e53a3cd133d6ffd4d174ec100210d1b3cee5c6073601d1 |
glusterfs-cli-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 48878a33080ac6bf2ff0b0e8b38bb7286c217d10d0fbbf8f971ce6b372e99853 |
glusterfs-client-xlators-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 5e7c4f343dd1a6743238dbc99a5321f19b6d374992079e1e93c814ecf6b297d9 |
glusterfs-debuginfo-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 0c8878919ad52cc680417760fede1369b713ffda453c340507a44240f0d8e83d |
glusterfs-devel-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 6a4904c65e928fb65d44595f7e62234f00c2cdafa709dc4e5671459cd77a24d7 |
glusterfs-fuse-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 6424dd37f2d21fb64f5ca2b83f110175a9b35700247c2b259d665cc4bbd5db42 |
glusterfs-ganesha-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 03f4cb6cf95dde9ad43fd8b2432552655f8da34fa8a3b373478cda166bfe8f4d |
glusterfs-geo-replication-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 135d929092ed913c62e95c64d46143cf2674d8e2e04b3c979ce5149cf1bea33a |
glusterfs-libs-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: ba6cee5f5b3b54228d778f9086f482da4e394760255285b4b6ce16bb5adb7c76 |
glusterfs-rdma-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 80826f79c21f4244c12982717b44e8754a84ad92748c392f9130eccb03fed1e2 |
glusterfs-server-3.7.9-10.el6rhs.x86_64.rpm | SHA-256: 9596f41543a5d84256ea05ed0054121cdc2b29e64ef43583ee880a317ff80ec2 |
python-gluster-3.7.9-10.el6rhs.noarch.rpm | SHA-256: 99200c3a3f06042f9a6b36719dc1401acca9c7b3a4fa09ef4f06bc59bc3345e5 |
redhat-storage-server-3.1.3.0-3.el6rhs.noarch.rpm | SHA-256: 1be8adf0f800c7d7bfc3301bf2c682d985f24b54e665b5ccdd8cc529d19ddcf9 |
sanlock-2.8-3.el6.x86_64.rpm | SHA-256: acc707727d0fe2c571c6c68d2e8845072f8506c822eb38641848e1acea35a470 |
sanlock-debuginfo-2.8-3.el6.x86_64.rpm | SHA-256: 13f6863ef6ccdd6f66100039cf7cce92ac4662cabb027812ba571053f3ed29e5 |
sanlock-devel-2.8-3.el6.x86_64.rpm | SHA-256: 6c048c6979ff24468871b38f8315492fe4183f7426f4b7dfe3b620cc171891c9 |
sanlock-lib-2.8-3.el6.x86_64.rpm | SHA-256: cb77ea3f0f9b656db00bff30863b5d55914059b586de0b7342ffa78779f8e4d7 |
sanlock-python-2.8-3.el6.x86_64.rpm | SHA-256: 86fe9b0f1f2c356878bdd2f8ae556bf0a5936603103fd737255ab4b313470147 |
vdsm-4.16.30-1.5.el6rhs.x86_64.rpm | SHA-256: 1203a6326ffd12e553295238de23319bc6d8a8744cafc2aa87f99518750470f5 |
vdsm-cli-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: 347c968a53912dbe93a3854b881413d65a35fcec1539516712d6800e86b56f71 |
vdsm-debug-plugin-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: 1c919062f708460e7d237764401056f6edec6e3ac9255c5162f7437e170bc8fd |
vdsm-debuginfo-4.16.30-1.5.el6rhs.x86_64.rpm | SHA-256: 08ced734adba165c5fa8e41e5d79d98acc7a6f9e4506b2ebc7c14f4ac3e838fb |
vdsm-gluster-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: 205379ab1241bafff17feec151fd3e562138ee5d302c8c5bf95c6e4f27fc3031 |
vdsm-hook-ethtool-options-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: 46e5157ccc436bfa6a6af9ff09ef7847b942749e01f02bd8588778663e1a8f78 |
vdsm-hook-faqemu-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: e21be55862a9fb1e866253b6720f860bca31536261340f52ba1294d9e3fc6045 |
vdsm-hook-openstacknet-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: 66802a251b684b05ec2457d59b248ef16afcf47fb3c2af364ec937ed9c002cc6 |
vdsm-hook-qemucmdline-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: 6cddd40113908ddf68d7a21b7c4279d86efba9e463840bcc29775fce3e0fb749 |
vdsm-jsonrpc-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: a835a56dcca2e8cc0b8d5983324123772ab35be4af602b6a167ef81b92634ceb |
vdsm-python-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: b37448fdf53556e19ffe325a412472c7a686da5cf603f2a042d1eecfee894b22 |
vdsm-python-zombiereaper-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: 668985221fd09484e0122b0a30ec470c68c76e7a4ea89f6ae9e235be683e0c72 |
vdsm-reg-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: 4e7ea473f093d23a00a9db1df64c52005f72b24047fb0a40638b7984157d7532 |
vdsm-tests-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: 992143157f879f1192f587a4ede694e28a129a17d577fa8badd87cebeb38cbf0 |
vdsm-xmlrpc-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: 764ccbd06edcbe3998b231fd727cf23ad0f31fb0ca0c4201e2821dde3f3b2cf5 |
vdsm-yajsonrpc-4.16.30-1.5.el6rhs.noarch.rpm | SHA-256: 80fb1b5b59cda0f057a34c5bbde56fdf209bffa4ddaf62755071ef6867ff60aa |
Red Hat Gluster Storage Nagios Server 3 for RHEL 6
Package | SHA-256 |
---|---|
SRPM | |
gluster-nagios-common-0.2.4-1.el6rhs.src.rpm | SHA-256: 6bce0e4357b1e369ca0ec91f7cde4fd24c25e4b865ec2d1dbe4941c395436d1a |
nagios-server-addons-0.2.5-1.el6rhs.src.rpm | SHA-256: 1d363b40d84896f464a4590f1ecf205ff9dc23371931bc120564206fcc93031e |
x86_64 | |
gluster-nagios-common-0.2.4-1.el6rhs.noarch.rpm | SHA-256: b09188c6a874d584f5fad17545d1479c30cdb36509ed07a2db3cc9f50f069cbb |
nagios-server-addons-0.2.5-1.el6rhs.noarch.rpm | SHA-256: 84183bb7d1588a26dcdfc5df626893c295102b949ea6cd6d46469688552ba09a |
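Before installation, downloaded packages can be checked against the SHA-256 values listed above. A minimal example, using the first client package published in this advisory (sha256sum is part of coreutils; rpm -K additionally verifies digests and the package signature):

    $ sha256sum glusterfs-3.7.9-10.el6.x86_64.rpm
    8ce97cc691af0b1bfb947c2db31a7832520db00ff5f5a4ebc5d2ba97926e946d  glusterfs-3.7.9-10.el6.x86_64.rpm
      (the digest must match the value published for that package above)
    $ rpm -K glusterfs-3.7.9-10.el6.x86_64.rpm
      (checks embedded digests and the GPG signature, provided the Red Hat release key is imported)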
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.