Issued: 2015-07-29
Updated: 2015-07-29
RHSA-2015:1495 - Security Advisory
Synopsis
Important: Red Hat Gluster Storage 3.1 update
Type/Severity
Security Advisory: Important
Topic
Red Hat Gluster Storage 3.1, which fixes multiple security issues and
several bugs, and adds various enhancements, is now available.
Red Hat Product Security has rated this update as having Important security
impact. Common Vulnerability Scoring System (CVSS) base scores, which give
detailed severity ratings, are available for each vulnerability from the
CVE links in the References section.
Description
Red Hat Gluster Storage is a software-only scale-out storage solution that
provides flexible and affordable unstructured data storage. It unifies data
storage and infrastructure, increases performance, and improves availability and
manageability to meet enterprise-level storage challenges.
Red Hat Gluster Storage's Unified File and Object Storage is built on
OpenStack's Object Storage (swift).
A flaw was found in the metadata constraints in OpenStack Object Storage
(swift). By adding metadata in several separate calls, a malicious user
could bypass the max_meta_count constraint, and store more metadata than
allowed by the configuration. (CVE-2014-7960)
Multiple flaws were found in check-mk, a plug-in for the Nagios monitoring
system, which is used to provide monitoring and alerts for the Red Hat
Gluster Storage network and infrastructure: a reflected cross-site
scripting flaw due to improper output encoding, a flaw that could allow
attackers to write .mk files to arbitrary file system locations, and a flaw
that could possibly allow remote attackers to execute code in the wato
(web-based admin) module due to the unsafe use of the pickle() function.
(CVE-2014-5338, CVE-2014-5339, CVE-2014-5340)
This update also fixes numerous bugs and adds various enhancements. Space
precludes documenting all of these changes in this advisory. Users are
directed to the Red Hat Gluster Storage 3.1 Technical Notes, linked to in
the References section, for information on the most significant of these
changes.
This advisory introduces the following new features:
- NFS-Ganesha is now supported in a highly available active-active
environment. In such an environment, if an NFS-Ganesha server that is
connected to an NFS client running a particular application crashes, the
application/NFS client is seamlessly connected to another NFS-Ganesha
server without any administrative intervention. (A configuration sketch
follows this list.)
- The snapshot scheduler creates snapshots automatically at a configured
interval. Snapshots can be created every hour, on a particular day of the
month, in a particular month, or on a particular day of the week. (A
scheduler example follows this list.)
- You can now create a clone of a snapshot. The clone is writable and
behaves like a regular volume, and a new volume can be created from a
particular snapshot clone. Snapshot clone is a Technology Preview feature.
(An example follows this list.)
- Red Hat Gluster Storage supports network encryption using TLS/SSL.
TLS/SSL is used for authentication and authorization, in place of the
home-grown authentication framework used for normal connections. (An
example follows this list.)
- BitRot detection identifies silent corruption of data, where the disk
gives no indication of the error to the storage software layer. BitRot
also helps catch backend tinkering of bricks, where data is manipulated
directly on the bricks without going through FUSE, NFS, or any other
access protocol. (An example follows this list.)
- Glusterfind is a utility that provides the list of files modified
between the previous backup session and the current period. This list of
files can then be used by any industry-standard backup application. (A
session example follows this list.)
- The Parallel Network File System (pNFS) is part of the NFS v4.1 protocol
and allows compute clients to access storage devices directly and in
parallel. pNFS is a Technology Preview feature. (A mount example follows
this list.)
- Tiering improves performance and compliance aspects in a Red Hat Gluster
Storage environment. It serves as an enabling technology for other
enhancements by combining cost-effective or archivally oriented storage
for the majority of user data with high-performance storage that absorbs
the majority of the I/O workload. Tiering is a Technology Preview feature.
(An example follows this list.)
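The following is a rough sketch of the NFS-Ganesha active-active setup
described above: the HA cluster is defined in /etc/ganesha/ganesha-ha.conf
(the file read by the ganesha-ha.sh tooling mentioned in the fix list
below), and exports are then enabled per volume. The host names, VIPs, and
VOLNAME are placeholders:

    # /etc/ganesha/ganesha-ha.conf (illustrative values only)
    HA_NAME="rhgs-ganesha-ha"
    HA_VOL_SERVER="server1"
    HA_CLUSTER_NODES="server1.example.com,server2.example.com"
    VIP_server1="192.0.2.1"    # VIP_<hostname>=<ip> entries, one per node
    VIP_server2="192.0.2.2"

    # Enable the NFS-Ganesha cluster, then export a volume:
    gluster nfs-ganesha enable
    gluster volume set VOLNAME ganesha.enable on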
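The snapshot scheduler is driven by the snap_scheduler.py helper and keeps
its state on the shared storage volume. A minimal sketch, where the job
name, cron-style schedule, and VOLNAME are placeholders:

    # One-time setup: enable the shared storage volume
    gluster volume set all cluster.enable-shared-storage enable

    # Initialise the scheduler on each node, enable it, and add a job
    snap_scheduler.py init
    snap_scheduler.py enable
    snap_scheduler.py add "Daily-snap" "0 2 * * *" VOLNAME  # 02:00 daily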
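A snapshot clone is created from an existing, activated snapshot and then
started like any other volume. The snapshot, clone, and volume names below
are placeholders:

    gluster snapshot create snap1 VOLNAME
    gluster snapshot activate snap1
    gluster snapshot clone clone-vol snap1
    gluster volume start clone-vol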
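Enabling network encryption, as a minimal sketch that assumes the TLS key,
certificate, and CA bundle are already in place on every node; VOLNAME and
the identity list are placeholders:

    # Expected on each node:
    #   /etc/ssl/glusterfs.pem (certificate), /etc/ssl/glusterfs.key (key),
    #   /etc/ssl/glusterfs.ca (CA / concatenated peer certificates)
    gluster volume set VOLNAME client.ssl on
    gluster volume set VOLNAME server.ssl on
    gluster volume set VOLNAME auth.ssl-allow 'client1,client2'

    # Optional: also encrypt management (glusterd) traffic
    touch /var/lib/glusterd/secure-access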
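BitRot detection is enabled per volume, and the scrubber's frequency and
pause/resume state are controlled through the same command family (hourly
scrubbing is the enhancement tracked in BZ 1226132 below); VOLNAME is a
placeholder:

    gluster volume bitrot VOLNAME enable
    gluster volume bitrot VOLNAME scrub-frequency hourly
    gluster volume bitrot VOLNAME scrub pause
    gluster volume bitrot VOLNAME scrub resume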
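Glusterfind works in create/pre/post sessions: pre writes the changed-file
list for the backup application, and post marks the session as consumed.
The session name, VOLNAME, and output path are placeholders:

    glusterfind create backup-session VOLNAME
    glusterfind pre backup-session VOLNAME /tmp/changed-files.txt
    # ...hand /tmp/changed-files.txt to the backup application...
    glusterfind post backup-session VOLNAME
    glusterfind list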
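On the client side, pNFS is consumed as a regular NFS v4.1 mount against
the NFS-Ganesha server; the host, volume, and mount point are placeholders:

    mount -t nfs -o vers=4.1 server1.example.com:/VOLNAME /mnt/pnfs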
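A hot tier is attached to an existing volume and is later drained and
detached in two phases. The brick paths are placeholders, and the
promote/demote intervals (in seconds) are the tunables referenced in
BZ 1229260 below:

    gluster volume attach-tier VOLNAME replica 2 \
        server1:/rhgs/hot1 server2:/rhgs/hot2
    gluster volume set VOLNAME cluster.tier-promote-frequency 120
    gluster volume set VOLNAME cluster.tier-demote-frequency 3600

    # Detach: migrate data off the hot tier, then commit
    gluster volume detach-tier VOLNAME start
    gluster volume detach-tier VOLNAME commit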
All users of Red Hat Gluster Storage are advised to apply this update.
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
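On a registered system, the update is typically applied with yum; the
exact package set pulled in depends on the channels the system is
subscribed to:

    yum update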
Affected Products
- Red Hat Enterprise Linux Server 6 x86_64
- Red Hat Enterprise Linux Server 5 x86_64
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 6 x86_64
- Red Hat Gluster Storage Nagios Server 3 for RHEL 6 x86_64
Fixes
- BZ - 826836 - Error reading info on glusterfsd service while removing 'Gluster File System' group via yum
- BZ - 871727 - [RHEV-RHS] Bringing down one storage node in a pure replicate volume (1x2) moved one of the VMs to paused state.
- BZ - 874745 - [SELinux] [RFE] [RHGS] Red Hat Storage daemons need SELinux confinement
- BZ - 980043 - quota: regex in logging message
- BZ - 987980 - Dist-geo-rep : after remove brick commit from the machine having multiple bricks, the change_detector becomes xsync.
- BZ - 990029 - [RFE] enable gfid to path conversion
- BZ - 1002991 - Dist-geo-rep: errors in log related to syncdutils.py and monitor.py (status is Stable though)
- BZ - 1006840 - Dist-geo-rep : After data got synced on the slave volume, a few directories (owner is a non-privileged user) have different permissions than the master volume
- BZ - 1008826 - [RFE] Dist-geo-rep : remove-brick commit(for brick(s) on master volume) should kill geo-rep worker process for the bricks getting removed.
- BZ - 1009351 - [RFE] Dist-geo-rep : no need of restarting other geo-replication instances when they receive 'ECONNABORTED' on remove-brick commit of some other brick
- BZ - 1010327 - Dist-geo-rep : session status is defunct after syncdutils.py errors in log
- BZ - 1021820 - quota: quotad.socket in /tmp
- BZ - 1023416 - quota: limit set cli issues with setting in Bytes(B) or without providing the type(size)
- BZ - 1026831 - Dist-geo-rep : In the newly added node, the gsyncd uses xsync as change_detector instead of changelog
- BZ - 1027142 - Dist-geo-rep : After remove brick commit it should stop the gsyncd running on the removed node
- BZ - 1027693 - Quota: features.quota-deem-statfs is "on" even after disabling quota.
- BZ - 1027710 - [RFE] Quota: Make "quota-deem-statfs" option ON, by default, when quota is enabled.
- BZ - 1028965 - Dist-geo-rep : geo-rep config shows ignore_deletes as true always, even though it's not true.
- BZ - 1029104 - dist-geo-rep: For a node which has passive status, "crawl status" is listed as "Hybrid Crawl"
- BZ - 1031515 - Dist-geo-rep : too much logging in slave gluster logs when there are some 20 million files for xsync to crawl
- BZ - 1032445 - Dist-geo-rep : When an active brick goes down and comes back, the gsyncd associated with it starts using xsync as change_detector
- BZ - 1039008 - Dist-geo-rep : After checkpoint set, status detail doesn't show updated checkpoint info until second execution.
- BZ - 1039674 - quota: ENOTCONN periodically seen in logs when setting hard/soft timeout during I/O.
- BZ - 1044344 - Assertion failed:uuid null while running getfattr on a file in a directory which has quota limit set
- BZ - 1047481 - DHT: Setfattr doesn't take rebalance into consideration
- BZ - 1048122 - [SNAPSHOT] : gluster snapshot delete doesn't provide option to delete all / multiple snaps of a given volume
- BZ - 1054154 - dist-geo-rep : gsyncd crashed in syncdutils.py while removing a file.
- BZ - 1059255 - dist-geo-rep : checkpoint doesn't reach because checkpoint became stale.
- BZ - 1062401 - RFE: move code for directory tree setup on hcfs to standalone script
- BZ - 1063215 - gluster cli crashed upon running 'heal info' command with the binaries compiled with -DDEBUG
- BZ - 1082659 - glusterfs-api package should pull glusterfs package as dependency
- BZ - 1083024 - [SNAPSHOT]: Setting config snap-max-hard-limit values require correction in output in different scenarios
- BZ - 1085202 - [SNAPSHOT]: While rebalance is in progress as part of remove-brick the snapshot creation fails with prevalidation
- BZ - 1093838 - Brick-sorted order of filenames in RHS directory harms Hadoop mapreduce performance
- BZ - 1098093 - [SNAPSHOT]: setting negative values in snapshot config should result in proper message
- BZ - 1098200 - [SNAPSHOT]: Stale options (Snap volume) needs to be removed from volume info
- BZ - 1101270 - quota a little bit lower than max LONG fails
- BZ - 1101697 - [barrier] Spelling correction in glusterd log message while enabling/disabling barrier
- BZ - 1102047 - [RFE] Need gluster cli command to retrieve current op-version on the RHS Node
- BZ - 1103971 - quota: setting limit to 16384PB shows wrong stat with list commands
- BZ - 1104478 - [SNAPSHOT] Create snapshot failed with error "unbarrier brick op failed with the error quorum is not met"
- BZ - 1109111 - While doing yum update observed error reading information on service glusterfsd: No such file or directory
- BZ - 1109689 - [SNAPSHOT]: once we reach the soft-limit and auto-delete is set to disable, we warn the user, but this warning is not logged
- BZ - 1110715 - "error reading information on service glusterfsd: No such file or directory" in install.log
- BZ - 1113424 - Dist-geo-rep : geo-rep throws wrong error messages when incorrect commands are executed.
- BZ - 1114015 - [SNAPSHOT]: setting config values doesn't delete the already created snapshots, but wrongly warns the user that it might delete them
- BZ - 1114976 - nfs-ganesha: logs inside the /tmp directory
- BZ - 1116084 - Quota: Null client error messages are repeatedly written to quotad.log.
- BZ - 1117172 - DHT : - rename of files failed with 'No such File or Directory' when Source file was already present and all sub-volumes were up
- BZ - 1117270 - [SNAPSHOT]: error message for invalid snapshot status should be aligned with error messages of info and list
- BZ - 1120907 - [RFE] Add confirmation dialog to the snapshot restore operation
- BZ - 1121560 - [SNAPSHOT]: Output message when a snapshot create is issued when multiple bricks are down needs to be improved
- BZ - 1122064 - [SNAPSHOT]: activate and deactivate doesn't do a handshake when a glusterd comes back
- BZ - 1127401 - [EARLY ACCESS] ignore-deletes option is not something you can configure
- BZ - 1130998 - [SNAPSHOT]: "man gluster" needs modification for few snapshot commands
- BZ - 1131044 - DHT : - renaming the same file from multiple mounts failed with 'Structure needs cleaning' error on all mounts
- BZ - 1131418 - remove-brick: logs display the error related to "Operation not permitted"
- BZ - 1131968 - [SNAPSHOT]: snapshotted volume is read-only but it shows rw attributes in mount
- BZ - 1132026 - [SNAPSHOT]: nouuid is appended for every snapshotted brick, which causes duplication if the original brick already has nouuid
- BZ - 1132337 - CVE-2014-5338 CVE-2014-5339 CVE-2014-5340 check-mk: multiple flaws fixed in versions 1.2.4p4 and 1.2.5i4
- BZ - 1134690 - [SNAPSHOT]: glusterd crash while snapshot creation was in progress
- BZ - 1139106 - [RFE] geo-rep mount broker setup has to be simplified.
- BZ - 1140183 - dist-geo-rep: Concurrent renames and node reboots results in slave having both source and destination of file with destination being 0 byte sticky file
- BZ - 1140506 - [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done
- BZ - 1141433 - [SNAPSHOT]: output correction in setting snap-max-hard/soft-limit for system/volume
- BZ - 1144088 - dist-geo-rep: Files on master and slave are not in sync after file renames on master volume.
- BZ - 1147627 - dist-geo-rep: Few symlinks not synced to slave after an Active node got rebooted
- BZ - 1150461 - CVE-2014-7960 openstack-swift: Swift metadata constraints are not correctly enforced
- BZ - 1150899 - FEATURE REQUEST: Add "disperse" feature from GlusterFS 3.6
- BZ - 1156637 - Gluster small-file creates do not scale with brick count
- BZ - 1160790 - RFE: bandwidth throttling of geo-replication
- BZ - 1165663 - [USS]: Inconsistent behaviour when a snapshot is default deactivated and when it is activated and then deactivated
- BZ - 1171662 - libgfapi crashes in glfs_fini for RDMA type volumes
- BZ - 1176835 - [USS] : statfs call fails on USS.
- BZ - 1177911 - [USS]:Giving the wrong input while setting USS fails as expected but gluster v info shows the wrong value set in features.uss
- BZ - 1178130 - quota: quota list displays double the size of previous value, post heal completion.
- BZ - 1179701 - dist-geo-rep: Geo-rep skipped some files after replacing a node with the same hostname and IP
- BZ - 1181108 - [RFE] While creating a snapshot the timestamp has to be appended to the snapshot name.
- BZ - 1183988 - DHT:Quota:- brick process crashed after deleting .glusterfs from backend
- BZ - 1186328 - [SNAPSHOT]: Refactoring snapshot functions from glusterd-utils.c
- BZ - 1195659 - rhs-hadoop package is missing dependencies
- BZ - 1198021 - [SNAPSHOT]: Schedule snapshot creation with frequency of half-hourly, hourly, daily, weekly, monthly and yearly
- BZ - 1201712 - [georep]: Transition from xsync to changelog doesn't happen once the brick is brought online
- BZ - 1201732 - [dist-geo-rep]:Directory not empty and Stale file handle errors in geo-rep logs during deletes from master in history/changelog crawl
- BZ - 1202388 - [SNAPSHOT]: After a volume which has quota enabled is restored to a snap, attaching another node to the cluster is not successful
- BZ - 1203901 - NFS: IOZone tests hang, disconnects and hung tasks seen in logs.
- BZ - 1204044 - [geo-rep] stop-all-gluster-processes.sh fails to stop all gluster processes
- BZ - 1208420 - [SELinux] [SMB]: smb service fails to start with SELINUX enabled on RHEL6.6 and RHS 3.0.4 samba rpms
- BZ - 1209132 - RHEL7:Need samba build for RHEL7
- BZ - 1211839 - While performing in-service software update, glusterfs-geo-replication and glusterfs-cli packages are updated even when glusterfsd or distributed volume is up
- BZ - 1212576 - Inappropriate error message generated when non-resolvable hostname is given for peer in 'gluster volume create' command for distribute-replicate volume creation
- BZ - 1212701 - Remove replace-brick with data migration support from gluster cli
- BZ - 1213245 - Volume creation fails with error "host is not in 'Peer in Cluster' state"
- BZ - 1213325 - SMB:Clustering entries not removed from smb.conf even after stopping the ctdb volume when selinux running in permissive mode
- BZ - 1214211 - NFS logs are filled with system.posix_acl_access messages
- BZ - 1214253 - [SELinux] [glusterfsd] SELinux is preventing /usr/sbin/glusterfsd from write access on the sock_file /var/run/glusterd.socket
- BZ - 1214258 - [SELinux] [glusterfsd] SELinux is preventing /usr/sbin/glusterfsd from unlink access on the sock_file /var/run/glusterd.socket
- BZ - 1214616 - nfs-ganesha: iozone write test is causing nfs server crash
- BZ - 1215430 - erasure coded volumes can't read large directory trees
- BZ - 1215635 - [SELinux] [ctdb] SELinux is preventing /bin/bash from execute access on the file /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
- BZ - 1215637 - [SELinux] [RHGS-3.1] AVC's of all the executable hooks under /var/lib/glusterd/hooks/ on RHEL-6.7
- BZ - 1215640 - [SELinux] [smb] SELinux is preventing /usr/sbin/smbd from execute_no_trans access on the file /usr/sbin/smbd
- BZ - 1215885 - [SELinux] SMB: WIth selinux in enforcing mode the mount to a gluster volume on cifs fails with i/o error.
- BZ - 1216941 - [SELinux] RHEL7:SMB: ctdbd does not have write permissions on fuse mount when SELinux is enabled
- BZ - 1217852 - enable all HDP/GlusterFS stacks
- BZ - 1218902 - [SELinux] [SMB]: RHEL7.1- SELinux policy for all AVC's on Samba and CTDB
- BZ - 1219793 - Dependency problem due to glusterfs-api depending on glusterfs instead of only glusterfs-libs [rhel-6]
- BZ - 1220999 - [SELinux] [nfs-ganesha]: Volume export fails when SELinux is in Enforcing mode - RHEL-6.7
- BZ - 1221344 - Change hive/warehouse perms from 0755 to 0775
- BZ - 1221585 - [RFE] Red Hat Gluster Storage server support on RHEL 7 platform.
- BZ - 1221612 - Minor tweaks for Samba spec to fix build issues found in QA
- BZ - 1221743 - glusterd not starting after a fresh install of 3.7.0-1.el6rhs build due to missing library files
- BZ - 1222442 - I/O's hanging on tiered volumes (NFS)
- BZ - 1222776 - [geo-rep]: With tarssh the file is created at the slave but it doesn't get synced
- BZ - 1222785 - [Virt-RHGS] Creating a image on gluster volume using qemu-img + gfapi throws error messages related to rpc_transport
- BZ - 1222856 - [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
- BZ - 1223201 - Simplify creation and set-up of meta-volume (shared storage)
- BZ - 1223205 - [Snapshot] Scheduled job is not processed when one of the node of shared storage volume is down
- BZ - 1223206 - "Snap_scheduler disable" should have different return codes for different failures.
- BZ - 1223209 - [Snapshot] Do not run scheduler if ovirt scheduler is running
- BZ - 1223225 - cli correction: trying to create multiple bricks on the same server shows replicate volume instead of disperse volume
- BZ - 1223238 - Update of glusterfs native client rpms in RHEL 7 rh-common channel for RHGS 3.1
- BZ - 1223299 - Data Tiering:Frequency counters not working
- BZ - 1223677 - [RHEV-RHGS] After self-heal operation, VM Image file loses the sparseness property
- BZ - 1223695 - [geo-rep]: Even after successful sync, the DATA counter did not reset to 0
- BZ - 1223715 - Though brick daemon is not running, gluster vol status command shows the pid
- BZ - 1223738 - Allow only lookup and delete operation on file that is in split-brain
- BZ - 1223906 - Downstream bz for vdsm dist-git
- BZ - 1224033 - [Backup]: Crash observed when glusterfind pre is run on a dist-rep volume
- BZ - 1224043 - [Backup]: Incorrect error message displayed when glusterfind post is run with invalid volume name
- BZ - 1224046 - [Backup]: Misleading error message when glusterfind delete is given with non-existent volume
- BZ - 1224065 - gluster nfs-ganesha enable command failed
- BZ - 1224068 - [Backup]: Packages to be installed for glusterfind api to work
- BZ - 1224076 - [Backup]: Glusterfind not working with change-detector as 'changelog'
- BZ - 1224077 - Directories are missing on the mount point after attaching tier to distribute replicate volume.
- BZ - 1224081 - Detaching tier start failed on dist-rep volume
- BZ - 1224086 - Detach tier commit failed on a dist-rep volume
- BZ - 1224109 - [Backup]: Unable to create a glusterfind session
- BZ - 1224126 - NFS logs are filled with system.posix_acl_access messages
- BZ - 1224159 - data tiering:detach-tier start command fails with "Commit failed on localhost"
- BZ - 1224164 - data tiering: detach tier status not working
- BZ - 1224165 - SIGNING FAILURE error messages are popping up in the bitd log
- BZ - 1224175 - Glusterd fails to start after volume restore, tier attach and node reboot
- BZ - 1224183 - quota: glusterfsd crash once quota limit-usage is executed
- BZ - 1224215 - nfs-ganesha: rmdir logs "remote operation failed: Stale file handle" even though the operation is successful
- BZ - 1224218 - BitRot :- bitd is not signing objects if more than 3 bricks are present on the same node
- BZ - 1224229 - BitRot :- If a peer in the cluster doesn't have a brick, then it should not start bitd on that node and should not create a partial volume file
- BZ - 1224232 - BitRot :- In case of NFS mount, Object Versioning and file signing is not working as expected
- BZ - 1224236 - [Backup]: RFE - Glusterfind CLI commands need to respond based on volume's start/stop state
- BZ - 1224239 - [Data Tiering] : Attaching a replica 2 hot tier to a replica 3 volume changes the volume topology to nx2 - causing inconsistent data between bricks in the replica set
- BZ - 1224240 - BitRot :- scrub pause/resume should give proper error message if scrubber is already paused/resumed and Admin tries to perform same operation
- BZ - 1224246 - [SNAPSHOT] : Appending time stamp to snap name while using scheduler to create snapshots should be removed.
- BZ - 1224609 - /etc/redhat-storage-release needs update to provide identity for RHGS Server 3.1.0
- BZ - 1224610 - nfs-ganesha: execution of script ganesha-ha.sh throws a error for a file
- BZ - 1224615 - NFS-Ganesha : Building downstream NFS-Ganesha rpms for 3.1
- BZ - 1224618 - Ganesha server became unresponsive after successful failover
- BZ - 1224619 - nfs-ganesha: delete node throws an error and pcs status also notifies about failures; in fact, I/O also doesn't resume post grace period
- BZ - 1224629 - RHEL7: Samba build for rhel7 fails to install with dependency errors.
- BZ - 1224639 - 'glusterd.socket' file created by rpm scriptlet is not cleaned-up properly post installation
- BZ - 1224658 - RHEL7: CTDB build needed for RHEL7 with dependencies resolved
- BZ - 1224662 - [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
- BZ - 1225338 - [geo-rep]: snapshot creation times out even if geo-replication is in pause/stop/delete state
- BZ - 1225371 - peers connected in the middle of a transaction are participating in the transaction
- BZ - 1225417 - Disks not visible in Storage devices tab on clicking Sync option
- BZ - 1225507 - nfs-ganesha: Getting issues for nfs-ganesha on new nodes of glusterfs, error is /etc/ganesha/ganesha-ha.conf: line 11: VIP_<hostname with fqdn>=<ip>: command not found
- BZ - 1226132 - [RFE] Provide hourly scrubbing option
- BZ - 1226167 - tiering: use separate log/socket/pid files for tiering
- BZ - 1226168 - Do not allow detach-tier commands on a non-tiered volume
- BZ - 1226820 - Brick process crashed during self-heal process
- BZ - 1226844 - NFS-Ganesha: ACL should not be enabled by default
- BZ - 1226863 - nfs-ganesha: volume is not in list of exports in case of volume stop followed by volume start
- BZ - 1226889 - [Backup]: 'New' as well as 'Modify' entry getting recorded for a newly created hardlink
- BZ - 1226898 - [SELinux] redhat-storage-server should stop disabling SELinux
- BZ - 1227029 - glusterfs-devel: 3.7.0-3.el6 client package fails to install on dependency
- BZ - 1227179 - GlusterD fills the logs when the NFS-server is disabled
- BZ - 1227187 - The tiering feature requires counters.
- BZ - 1227197 - Disperse volume : Memory leak in client glusterfs
- BZ - 1227241 - cli/tiering:typo errors in tiering
- BZ - 1227311 - nfs-ganesha: 8 node pcs cluster setup fails
- BZ - 1227317 - Updating rfc.sh to point to the downstream branch.
- BZ - 1227326 - [SELinux] [BVT]: SELinux throws AVC errors while running DHT automation on RHEL-7.1
- BZ - 1227469 - should not spawn another migration daemon on graph switch
- BZ - 1227618 - [geo-rep]: use_meta_volume config option should be validated for its values
- BZ - 1227649 - linux untar hanged after the bricks are up in a 8+4 config
- BZ - 1227691 - [Backup]: Rename is getting recorded as a MODIFY entry in output file
- BZ - 1227704 - [Backup]: Glusterfind create should display a msg if the session is successfully created
- BZ - 1227709 - Not able to export volume using nfs-ganesha
- BZ - 1227869 - [Quota] The root of the volume on which the quota is set shows a volume size greater than the actual volume size when checked with the "df" command.
- BZ - 1228017 - [Backup]: Crash observed when glusterfind pre is run after deleting a directory containing files
- BZ - 1228127 - Volume needs restart after editing auth.ssl-allow list for volume options which otherwise has to be automatic
- BZ - 1228150 - nfs-ganesha: Upcall infrastructure support
- BZ - 1228152 - [RFE] nfs-ganesha: pNFS for RHGS 3.1
- BZ - 1228153 - nfs-ganesha: Fix gfapi.log location
- BZ - 1228155 - [RFE]nfs-ganesha: ACL feature support
- BZ - 1228164 - [Snapshot] Python crashes with traceback notification when shared storage is unmounted from the storage node
- BZ - 1228173 - [geo-rep]: RENAME are not synced to slave when quota is enabled.
- BZ - 1228222 - Disable pNFS by default for nfs 4.1 mount
- BZ - 1228225 - nfs-ganesha : Performance improvement for pNFS
- BZ - 1228246 - Data tiering:UI: volume status of a tier volume shows all bricks as hot bricks
- BZ - 1228247 - [Backup]: File movement across directories does not get captured in the output file in a X3 volume
- BZ - 1228294 - Disperse volume : Geo-replication failed
- BZ - 1228315 - [RFE] Provide nfs-ganesha for RHGS 3.1 on RHEL7
- BZ - 1228495 - [Backup]: Glusterfind pre fails with htime xattr updation error resulting in historical changelogs not available
- BZ - 1228496 - Disperse volume : glusterfsd crashed
- BZ - 1228525 - Disperse volume : 'ls -ltrh' doesn't list correct size of the files every time
- BZ - 1228529 - Disperse volume : glusterfs crashed
- BZ - 1228597 - [Backup]: Chown/chgrp for a directory does not get recorded as a MODIFY entry in the outfile
- BZ - 1228598 - [Backup]: Glusterfind session(s) created before starting the volume results in 'changelog not available' error, eventually
- BZ - 1228626 - nfs-ganesha: add node fails to add a new node to the cluster
- BZ - 1228674 - Need to upgrade CTDB to version 2.5.5
- BZ - 1229202 - VDSM service is not running without mom in RHEL-7
- BZ - 1229242 - data tiering: force remove-brick is detaching the tier
- BZ - 1229245 - Data Tiering:Replica type volume not getting converted to tier type after attaching tier
- BZ - 1229248 - Data Tiering:UI:changes required to CLI responses for attach and detach tier
- BZ - 1229251 - Data Tiering: Need to change volume info details like type of volume and number of bricks when a tier is attached to an EC (disperse) volume
- BZ - 1229256 - Incorrect and unclear "vol info" output for tiered volume
- BZ - 1229257 - Incorrect vol info post detach on disperse volume
- BZ - 1229260 - Data Tiering: add tiering set options to volume set help (cluster.tier-demote-frequency and cluster.tier-promote-frequency)
- BZ - 1229261 - data tiering: do not allow tiering related volume set options on a regular volume
- BZ - 1229263 - Data Tiering:do not allow detach-tier when the volume is in "stopped" status
- BZ - 1229266 - [Tiering] : Attaching another node to the cluster which has a tiered volume times out
- BZ - 1229268 - Files migrated should stay on a tier for a full cycle
- BZ - 1229274 - tiering:glusterd crashed when trying to detach-tier commit force on a non-tiered volume.
- BZ - 1229567 - context of access control translator should be updated properly for GF_POSIX_ACL_*_KEY xattrs
- BZ - 1229569 - FSAL_GLUSTER : inherit ACLs is not working properly for group write permissions
- BZ - 1229607 - nfs-ganesha: unexporting a volume fails and nfs-ganesha process coredumps
- BZ - 1229623 - [Backup]: Glusterfind delete does not delete the session related information present in $GLUSTERD_WORKDIR
- BZ - 1229664 - [Backup]: Glusterfind create/pre/post/delete prompts for password of the peer node
- BZ - 1229667 - nfs-ganesha: gluster nfs-ganesha disable Error : Request timed out
- BZ - 1229674 - [Backup]: 'Glusterfind list' should display an appropriate output when there are no active sessions
- BZ - 1230101 - [glusterd] glusterd crashed while trying to remove a bricks - one selected from each replica set - after shrinking nX3 to nX2 to nX1
- BZ - 1230129 - [SELinux]: [geo-rep]: AVC logged in RHEL6.7 during geo-replication setup between master and slave volume
- BZ - 1230186 - disable ping timer between glusterds
- BZ - 1230202 - [SELinux] [Snapshot] : avc logged in RHEL 6.7 set up during snapshot creation
- BZ - 1230252 - [New] - Creating a brick using RAID6 on RHEL7 gives unexpected exception
- BZ - 1230269 - [SELinux]: [geo-rep]: RHEL7.1 can not initialize the geo-rep session between master and slave volume, Permission Denied
- BZ - 1230513 - Disperse volume : data corruption with appending writes in 8+4 config
- BZ - 1230522 - Disperse volume : client crashed while running IO
- BZ - 1230607 - [geo-rep]: RHEL7.1: rsync should be made dependent package for geo-replication
- BZ - 1230612 - Disperse volume : NFS and Fuse mounts hung with plain IO
- BZ - 1230635 - Snapshot daemon failed to run on newly created dist-rep volume with uss enabled
- BZ - 1230646 - Not able to create snapshots for geo-replicated volumes when session is created with root user
- BZ - 1230764 - RHGS-3.1 op-version need to be corrected
- BZ - 1231166 - Disperse volume : fuse mount hung on renames on a distributed disperse volume
- BZ - 1231210 - [New] - xfsprogs should be pulled in as part of vdsm installation.
- BZ - 1231223 - Snapshot: When cluster.enable-shared-storage is enabled, shared storage should get mounted after node reboot
- BZ - 1231635 - glusterd crashed when testing heal full on replaced disks
- BZ - 1231647 - [SELinux] [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
- BZ - 1231651 - nfs-ganesha: 100% CPU usage with upcall feature enabled
- BZ - 1231732 - Renamed Files are missing after self-heal
- BZ - 1231771 - glusterd: Porting logging messages to new logging framework
- BZ - 1231775 - protocol client : Porting log messages to a new framework
- BZ - 1231776 - protocol server : Porting log messages to a new framework
- BZ - 1231778 - nfs : porting log messages to a new framework
- BZ - 1231781 - dht: Porting logging messages to new logging framework
- BZ - 1231782 - rdma : porting log messages to a new framework
- BZ - 1231784 - performance translators: Porting logging messages to new logging framework
- BZ - 1231788 - libgfapi : porting log messages to a new framework
- BZ - 1231792 - libglusterfs: Porting log messages to new framework and allocating segments
- BZ - 1231797 - tiering: Porting log messages to new framework
- BZ - 1231813 - Packages downgraded in RHGS 3.1 ISO image as compared to RHS 3.0.4 ISO image
- BZ - 1231831 - [RHGSS-3.1 ISO] redhat-storage-server package is not available in the ISO
- BZ - 1231835 - [RHGSS-3.1 ISO] ISO is based out of RHEL-6.6 and not RHEL-6.7
- BZ - 1232159 - Incorrect mountpoint for lv with existing snapshot lv
- BZ - 1232230 - [geo-rep]: Directory renames are not captured in changelog hence it doesn't sync to the slave and glusterfind output
- BZ - 1232237 - [Backup]: Directory creation followed by its subsequent movement logs a NEW entry with the old path
- BZ - 1232272 - [New] - gluster-nagios-addons is not present in default ISO installation.
- BZ - 1232428 - [SNAPSHOT] : Snapshot delete fails with error - Snap might not be in an usable state
- BZ - 1232603 - upgrade and install tests failing for RHGS 3.1 glusterfs client packages due to failed dependencies on glusterfs-client-xlators
- BZ - 1232609 - [geo-rep]: RHEL7.1 segmentation faults are observed on all the master nodes
- BZ - 1232624 - gluster v set help needs to be updated for cluster.enable-shared-storage option
- BZ - 1232625 - Data Tiering: Files not getting promoted once demoted
- BZ - 1232641 - while performing in-service software upgrade, the gluster-client-xlators, glusterfs-ganesha, and python-gluster packages should not get installed when the distributed volume is up
- BZ - 1232691 - [RHGS] RHGS 3.1 ISO menu title is obsolete
- BZ - 1233033 - nfs-ganesha: ganesha-ha.sh --refresh-config not working
- BZ - 1233062 - [Backup]: Modify after a rename is getting logged as a rename entry (only) in the outfile
- BZ - 1233147 - [Backup]: Rename and simultaneous movement of a hardlink logs an incorrect entry of RENAME
- BZ - 1233248 - glusterfsd, quotad and gluster-nfs process crashed while running nfs-sanity on a SSL enabled volume
- BZ - 1233486 - [RHGS client on RHEL 5] Failed to build *3.7.1-4 due to missing files
- BZ - 1233575 - [geo-rep]: Setting meta volume config to false when the meta volume is stopped/deleted leads geo-rep to Faulty
- BZ - 1233694 - Quota: Porting logging messages to new logging framework
- BZ - 1234419 - [geo-rep]: Feature fan-out fails with the use of meta volume config
- BZ - 1234720 - glusterd: glusterd crashes while importing a USS enabled volume which is already started
- BZ - 1234725 - [New] - Bricks fail to restore when a new node is added to the cluster and rebooted, when management and data are on two different interfaces
- BZ - 1234916 - nfs-ganesha:acls enabled and "rm -rf" causes ganesha process crash
- BZ - 1235121 - nfs-ganesha: pynfs failures
- BZ - 1235147 - FSAL_GLUSTER : symlinks are not working properly if acl is enabled
- BZ - 1235225 - [geo-rep]: set_geo_rep_pem_keys.sh needs modification in gluster path to support mount broker functionality
- BZ - 1235244 - Missing trusted.ec.config xattr for files after heal process
- BZ - 1235540 - peer probe results in Peer Rejected(Connected)
- BZ - 1235544 - Upcall: Directory or file creation should send cache invalidation requests to parent directories
- BZ - 1235547 - Discrepancy in the rcu build for rhel 7
- BZ - 1235599 - Update of rhs-hadoop packages in RHEL 6 RH-Common Channel for RHGS 3.1 release
- BZ - 1235613 - [SELinux] SMB: SELinux policy to be set for /usr/sbin/ctdbd_wrapper.
- BZ - 1235628 - Provide and use a common way to do reference counting of (internal) structures
- BZ - 1235735 - glusterfsd crash observed after upgrading from 3.0.4 to 3.1
- BZ - 1235776 - libxslt package in RHGS 3.1 advisory is older in comparison to already released package
- BZ - 1236556 - Ganesha volume export failed
- BZ - 1236980 - [SELinux]: RHEL7.1 CTDB node goes to DISCONNECTED/BANNED state when multiple nodes are rebooted
- BZ - 1237053 - Consecutive volume start/stop operations when ganesha.enable is on, leads to errors
- BZ - 1237063 - SMB:smb encrypt details to be updated in smb.conf man page for samba
- BZ - 1237065 - [ISO] warning: %post(samba-vfs-glusterfs-0:4.1.17-7.el6rhs.x86_64) scriptlet failed, exit status 255 seen in install.log
- BZ - 1237085 - SMB: smb3 encryption doesn't happen when smb encrypt is set to enabled for global and for share
- BZ - 1237165 - Incorrect state created in '/var/lib/nfs/statd'
- BZ - 1238149 - FSAL_GLUSTER : avoid possible memory corruption for inherit acl
- BZ - 1238156 - FSAL_GLUSTER : all operations on deadlink will fail when acl is enabled
- BZ - 1238979 - Though nfs-ganesha is not selected during installation, its packages are getting installed
- BZ - 1239057 - ganesha volume export fails in rhel7.1
- BZ - 1239108 - Gluster commands timeout on SSL enabled system, after adding new node to trusted storage pool
- BZ - 1239280 - glusterfsd crashed after volume start force
- BZ - 1239317 - quota+afr: quotad crash "afr_local_init (local=0x0, priv=0x7fddd0372220, op_errno=0x7fddce1434dc) at afr-common.c:4112"
- BZ - 1240168 - Glustershd crashed
- BZ - 1240196 - Unable to pause georep session if one of the nodes in cluster is not part of master volume.
- BZ - 1240228 - [SELinux] samba-vfs-glusterfs should have a dependency on selinux packages (RHEL-6.7)
- BZ - 1240233 - [SELinux] samba-vfs-glusterfs should have a dependency on some selinux packages (RHEL-7.1)
- BZ - 1240245 - Disperse volume: NFS crashed
- BZ - 1240251 - [SELinux] ctdb should have a dependency on selinux packages (RHEL-6.7)
- BZ - 1240253 - [SELinux] ctdb should have a dependency on selinux packages (RHEL-7.1)
- BZ - 1240617 - Disperse volume : rebalance failed with add-brick
- BZ - 1240782 - Quota: Larger than normal perf hit with quota enabled.
- BZ - 1240800 - Package on ISO RHGSS-3.1-20150707.n.0-RHS-x86_64-DVD1.iso missing in yum repos
- BZ - 1241150 - quota: marker accounting can get miscalculated after upgrade to 3.7
- BZ - 1241366 - nfs-ganesha: add-node logic does not copy the "/etc/ganesha/exports" directory to the correct path on the newly added node
- BZ - 1241449 - [ISO] RHGSS-3.1-RHEL-7-20150708.n.0-RHGSS-x86_64-dvd1.iso installation fails - NoSuchPackage: teamd
- BZ - 1241772 - rebase gstatus to latest upstream
- BZ - 1241839 - nfs-ganesha: bricks crash while executing acl related operation for named group/user
- BZ - 1241843 - CTDB:RHEL7: Yum remove/install ctdb gives error in pre_uninstall and post_install sections and fails to remove ctdb package
- BZ - 1241996 - [ISO] RHEL 7.1 based RHGS ISO uses workstation not server
- BZ - 1242162 - [ISO] RHEL 7.1 based RHGS ISO does not have "openssh-server" installed and thus prevents ssh login
- BZ - 1242367 - with Management SSL on, 'gluster volume create' crashes glusterd
- BZ - 1242423 - Disperse volume : client glusterfs crashed while running IO
- BZ - 1242487 - [SELinux] nfs-ganesha: AVC denied for nfs-ganesha.service , ganesha cluster setup fails in Rhel7
- BZ - 1242543 - replacing an offline brick fails with "replace-brick" command
- BZ - 1242767 - SMB: share entry from smb.conf is not removed after setting user.cifs and user.smb to disable.
- BZ - 1243297 - [ISO] Packages missing in RHGS-3.1 el7 ISO
- BZ - 1243358 - NFS hung while running IO - Malloc/free deadlock
- BZ - 1243725 - Do not install RHEL-6 tuned profiles in RHEL-7 based RHGS
- BZ - 1243732 - [New] - vdsm: wrong package mapping
- BZ - 1244338 - Cannot install IPA server and client (sssd-common) on RHGS 3.1 on RHEL 7 because of version conflict in libldb
- BZ - 1245563 - [RHGS-AMI] Root partition too small and not configurable
- BZ - 1245896 - rebuild rhsc-doc with latest doc build
- BZ - 1245988 - repoclosure complains that pcs-0.9.139-9.el6.x86_64 has unresolved dependencies
- BZ - 1246128 - RHGSS-3.1-RHEL-6-20150722.2-RHS-x86_64-DVD1.iso contains glusterfs-resource-agents which should be removed
- BZ - 1246216 - i686 packages in RHGS ISO that are absent in puddle repos [el6]
Red Hat Enterprise Linux Server 6
SRPM | Checksum
---|---
glusterfs-3.7.1-11.el6.src.rpm | SHA-256: 1b254d2921d992e549b8791bc115d50da8f5f9e140ea89c4185bfaadc31e998d |

x86_64 | Checksum
---|---
glusterfs-3.7.1-11.el6.x86_64.rpm | SHA-256: 6db607c5f0a9469be6d0d6ffb30ec4674199af2e95e2a96745d79590336ff11f |
glusterfs-api-3.7.1-11.el6.x86_64.rpm | SHA-256: e80b6f6eda984e05d1ac7eba19f547740e6e8015c3d40cee68042d00868d7a9e |
glusterfs-api-devel-3.7.1-11.el6.x86_64.rpm | SHA-256: a844931edfa44b00c5d1f482a7329bb6e4ede5ef1a44addf8544caea3e78a30d |
glusterfs-cli-3.7.1-11.el6.x86_64.rpm | SHA-256: cf84e8bcb82a07fceb60bbd1ad4a1e057922d6390d007fdf81719058cfd38a75 |
glusterfs-client-xlators-3.7.1-11.el6.x86_64.rpm | SHA-256: 5a07821d2ff75d6318accbffc089513ba2857d7217f4fc34c6f062bd9bfbdcb1 |
glusterfs-debuginfo-3.7.1-11.el6.x86_64.rpm | SHA-256: 774f8397355a07d45e32803b454d2cfd73340f8758bdb9658b11cefc16f9a2f3 |
glusterfs-devel-3.7.1-11.el6.x86_64.rpm | SHA-256: b776898fb8a3799399f27a65a76e4e374b2503c0fc74731e1cc2359f963ec8bf |
glusterfs-fuse-3.7.1-11.el6.x86_64.rpm | SHA-256: c0f3c6b9a140170b84d631df33ac7940a2189b69c9fab573a279d22d2cf8646d |
glusterfs-libs-3.7.1-11.el6.x86_64.rpm | SHA-256: 97b42ec5bd8797c32f1e0b021572f8b6faf2e14878dab1c5a7d91feefd4e0c39 |
glusterfs-rdma-3.7.1-11.el6.x86_64.rpm | SHA-256: 68c89941fd85abd762610f86aa3895153311fa0c278e64556577fabe53a1d93d |
python-gluster-3.7.1-11.el6.x86_64.rpm | SHA-256: af237f4c603c9de54ec31aa56f4507090ff378ff8623cd86459cae5d14f8d839 |
Red Hat Enterprise Linux Server 5
SRPM | Checksum
---|---
glusterfs-3.7.1-11.el5.src.rpm | SHA-256: 9e352d857468c0aa6a4d6315fc8ca6430d7b80770a8c19050badc970d76d8c95 |

x86_64 | Checksum
---|---
glusterfs-3.7.1-11.el5.x86_64.rpm | SHA-256: 600470c9122d2057fd45de63b05dc6b7f3260502a907c92623487cecf37fb9ea |
glusterfs-api-3.7.1-11.el5.x86_64.rpm | SHA-256: 4e365dabe5890369e3c2ac676032b13047e9410caa1dd206bef21eb135333380 |
glusterfs-api-devel-3.7.1-11.el5.x86_64.rpm | SHA-256: 7cbd07c525354a686e05d2958c9b36d69be7eef5d1c33c0b912204a78cb03fdd |
glusterfs-cli-3.7.1-11.el5.x86_64.rpm | SHA-256: 40eb59b9af0553871c5b0dd8e74b493e2cef0a710bd3b736be553d701dd9ba58 |
glusterfs-client-xlators-3.7.1-11.el5.x86_64.rpm | SHA-256: d035dbbcde681acebdff9aff9c0e039d96bef87b4f69eca5a68b25c1cd6f307a |
glusterfs-debuginfo-3.7.1-11.el5.x86_64.rpm | SHA-256: 0bf7c7b1d25c101d6dbff7880f98ba490c66ff607a9de1b8ac458b8aeebbc2aa |
glusterfs-devel-3.7.1-11.el5.x86_64.rpm | SHA-256: 39df6ade3b29a66535d81b8598fcec498d8bc8a8ef5bc9104c7dcde0ff03b2a6 |
glusterfs-fuse-3.7.1-11.el5.x86_64.rpm | SHA-256: 9cc30007bfc2605e9208a280247775ac028c25e003b16eda4c5bcd8bb172aee3 |
glusterfs-libs-3.7.1-11.el5.x86_64.rpm | SHA-256: b56e91f563f0f1c4d9a0fe1d51889c01f3c913a15e08d8653b87a868005e3e71 |
glusterfs-rdma-3.7.1-11.el5.x86_64.rpm | SHA-256: a2c2123ea50ffd9c44991394ff1684aba3ece6265005864a0901d4fd570124dc |
python-gluster-3.7.1-11.el5.x86_64.rpm | SHA-256: ce5c3623de7f9ce31550d4becb88062f528ac12f1a9e9f2cac94b2cc193995f4 |
Red Hat Gluster Storage Server for On-premise 3 for RHEL 6
SRPM | Checksum
---|---
augeas-1.0.0-10.el6.src.rpm | SHA-256: ed0aa3ca6de32312b082bd5475d83fc291ec980867fd2788e8cb02feacdb2597 |
clufter-0.11.2-1.el6.src.rpm | SHA-256: 4553dcdda9a4217aaa8fa8752d29a8f7c3dbfeacd71d5fc3855d390bbe9b0aef |
cluster-3.0.12.1-73.el6.src.rpm | SHA-256: 662b5e3265ae229d9aaf06c6ae1bea9bdacb3602205e98da1693b216d41519e1 |
clustermon-0.16.2-31.el6.src.rpm | SHA-256: 03e3531956e287f28d62c1686778d3ae6766cc6da69278d776f7f11c7eae7844 |
corosync-1.4.7-2.el6.src.rpm | SHA-256: 3c8dd577684b716f3fc372dab9dda9195f99875978cfaacfae90151d65a0e88b |
ctdb2.5-2.5.5-7.el6rhs.src.rpm | SHA-256: dd0f2572d33cf5aa00807d47ae57a7720aeb24e2606d0426c334ee99abb9fd54 |
fence-virt-0.2.3-19.el6.src.rpm | SHA-256: 59008b137ad2bd53ae46d12c3d0f71689b2280def9ccdede151b339bf82f30ee |
gluster-nagios-addons-0.2.4-4.el6rhs.src.rpm | SHA-256: 9a0f5464b3727fd8d551128d1e1b16ecc7d7b37c0856245661cf33f4bb513fc0 |
gluster-nagios-common-0.2.0-1.el6rhs.src.rpm | SHA-256: fa920b34e00fae56081878f3c97c6319ff4e7ed7ee6c992395c10fec1ff987c7 |
glusterfs-3.7.1-11.el6rhs.src.rpm | SHA-256: 12079e6acc7f04835fafb80efc653bb411b413c625acdebd766b8bb64792c1c0 |
gstatus-0.64-3.1.el6rhs.src.rpm | SHA-256: 4e0ce744d6de75ece8827c22e808cd8713069cb4af8f568f3e24a50838382744 |
libqb-0.17.1-1.el6.src.rpm | SHA-256: f78aeaff0887b721db5e8bd64ad644da401184d694d626b968653ace878db5a9 |
libtalloc-2.1.1-4.el6rhs.src.rpm | SHA-256: 7ffd7f63dc05ec35a68d622f743fc119504129a1a5e47ac01f34d88e5f7ca785 |
libvirt-0.10.2-54.el6.src.rpm | SHA-256: cd770c763d4585b1eb59a64121e5bbd0e2f41e7ae394269e533e3920254d16f2 |
nagios-plugins-1.4.16-12.el6rhs.src.rpm | SHA-256: 3986ddca26b2b57bdec3c9c59180839ab83ef18cfd26f56af828e41ec6aa3616 |
nrpe-2.15-4.1.el6rhs.src.rpm | SHA-256: 58e6a7d8055170d7edcc7d4357ed5cd687bc10ac56c0067baa0362627a74fb3d |
openais-1.1.1-7.el6.src.rpm | SHA-256: 8d443b399c57c5218aaf64dee8a84860221a29725555e02ea54d8654576fab18 |
openstack-swift-1.13.1-4.el6ost.src.rpm | SHA-256: b0b7d63fa50bdf9429ac9c9e1d680d283e6b881474ef87cfb0d9c7ea599ac8d7 |
pacemaker-1.1.12-8.el6.src.rpm | SHA-256: 0741cffca6da8e208787d2eba01442382750c68f03d8315fd4bfc32130ed0c15 |
pcs-0.9.139-9.el6.src.rpm | SHA-256: 4353ca4bce8330ba67811a8321de2141ccb03c1da6fb739ab533eae8cfd7f5a7 |
python-blivet-1.0.0.2-1.el6rhs.src.rpm | SHA-256: 903ea08e171967d5ec82d6745b41b773e7ad18b833a23e0db2d6e8ca5eba47fa |
python-cpopen-1.3-4.el6_5.src.rpm | SHA-256: babd41022c1034361c86606933cf63fe1388f095a7946196ed2b139865310197 |
python-eventlet-0.14.0-1.el6.src.rpm | SHA-256: a7686004e1188efda75ef076346a406b88a5e85d122f61d65fd33c101c209015 |
python-greenlet-0.4.2-1.el6.src.rpm | SHA-256: e8eedf67fb2d6aa580c2d95ac78f7a6dac6a78cc49b3f82d536d8ca0eda4c5b9 |
python-keystoneclient-0.9.0-5.el6ost.src.rpm | SHA-256: e897929bfad6e50d37575a5201b3991d996a868c9e3408776c8f6f409185c993 |
python-prettytable-0.7.2-1.el6.src.rpm | SHA-256: 011cda3922ebeec79d2356b808842a737e9dc975611e24ff2472f778817126d8 |
python-pyudev-0.15-2.el6rhs.src.rpm | SHA-256: cb394f0ea8e035d30e560459e07e524faeb75c4e1d50fd07a7bb819a39fa2697 |
redhat-storage-logos-60.0.20-1.el6rhs.src.rpm | SHA-256: f0ba83d750d91320cc22b32aa1cf986101110c3a69eb2641270d16021e1cc449 |
redhat-storage-server-3.1.0.3-1.el6rhs.src.rpm | SHA-256: 2ee43a91060c6f6d0022497a12a0cfb5ac8fe6a2cf2a5b5c3607035d1c3545e3 |
resource-agents-3.9.5-24.el6.src.rpm | SHA-256: 9453e06690e45b9628a3ba13a7e57cb7839f7c7342e2e89562d6035e9df38588 |
ricci-0.16.2-81.el6.src.rpm | SHA-256: 9e16a7eda0967c0635f70eaa71b6779f49cf99ef1d8acf09750527b085f8bee8 |
userspace-rcu-0.7.9-2.el6rhs.src.rpm | SHA-256: c17522788215f0a6d0ad85a536b96e889446eab823c5fdf38eccbba134456afd |
vdsm-4.16.20-1.2.el6rhs.src.rpm | SHA-256: b3bbbecd06a83a7f944f8852f13b5df4872455f274d745dd8dd2b7ee0337f2cb |

x86_64 | Checksum
---|---
augeas-1.0.0-10.el6.x86_64.rpm | SHA-256: 468a97d5e169a7175d9327a4500651de7a35bf534b32752866454ecfa1339608 |
augeas-debuginfo-1.0.0-10.el6.x86_64.rpm | SHA-256: a2ebd28cab24686eeeb3cc49eb447a7c10789baa1ce24b87d9a9c03fb02042b5 |
augeas-devel-1.0.0-10.el6.x86_64.rpm | SHA-256: a69c1609c677961b8ba1fad3784b74b855f3a67ba87bc5b6772fc65fa4913fee |
augeas-libs-1.0.0-10.el6.x86_64.rpm | SHA-256: 827360fc299d2dbf5d697684ff49403d20ed98d9ee9933c04e9cd849af1802ff |
ccs-0.16.2-81.el6.x86_64.rpm | SHA-256: 4ab312cc50fe40d441f17505ba3da383f8673805ae997d089742b51e0fb8f9cd |
clufter-cli-0.11.2-1.el6.noarch.rpm | SHA-256: adbdbd9cf7a910ead7cd63ee9956f280c0d58bcbe43fa23896da59f085ca4f31 |
clufter-debuginfo-0.11.2-1.el6.x86_64.rpm | SHA-256: bb4e86fbd7ff155b8e7b81ec6eb1ad6111b0d690b955f58aacf8d60939d72f79 |
clufter-lib-ccs-0.11.2-1.el6.noarch.rpm | SHA-256: 993c1358bd1ae9d2ed7ad524762d4ce6edd394d9ac24e12dc2eeaa4a9718eb31 |
clufter-lib-general-0.11.2-1.el6.noarch.rpm | SHA-256: c83ed71fa8a52bd045c36bff7b021b42fcf54f77cb58912a7118f512da8a8602 |
clufter-lib-pcs-0.11.2-1.el6.noarch.rpm | SHA-256: d9f968b832cd2321755c2d7c033fb0e78d077b360d1f5adf8cd77872bd9e450f |
cluster-cim-0.16.2-31.el6.x86_64.rpm | SHA-256: feb0d4f037ccd9cc031bb18eae64953ffa17e7c3a0bb403d2c4a57b23d18bd88 |
cluster-debuginfo-3.0.12.1-73.el6.x86_64.rpm | SHA-256: e9a4c4f032cb44d9cc936e0de43563c78c9f80072d91d79dbcca75ce2360c5ce |
cluster-snmp-0.16.2-31.el6.x86_64.rpm | SHA-256: c95458f9ef34b4c28c98c0c67dbae67d8e4eec49f6cf8283b338cb7ef2af81e6 |
clusterlib-3.0.12.1-73.el6.x86_64.rpm | SHA-256: 912c2e50566a4ea593887712dcd8b43ddab43187b412fba2377000adec6e0732 |
clusterlib-devel-3.0.12.1-73.el6.x86_64.rpm | SHA-256: 0156594c433ced00a6f2ff9977fced393d20baf9d3b62c3feb31c053a8d582c4 |
clustermon-debuginfo-0.16.2-31.el6.x86_64.rpm | SHA-256: 1e0ce0663e32d506b655060c97f6dbbc1e7598c5b65b602611fea1e999b9838a |
cman-3.0.12.1-73.el6.x86_64.rpm | SHA-256: 71fb4fd6c49e561696d9e3ab92f47e06898fa72289ef26f0144e5becb7de3eb8 |
corosync-1.4.7-2.el6.x86_64.rpm | SHA-256: 5b3b3a17381d149fbb0dc30e855d8f1fb2f155d43573bf7c748cb0c4c862644f |
corosync-debuginfo-1.4.7-2.el6.x86_64.rpm | SHA-256: c85f075cfc9e712aa31b8550293891bac2f9b4d3def53b476499a90a86b89d7b |
corosynclib-1.4.7-2.el6.x86_64.rpm | SHA-256: c9edae6420e54c441a8c450cfd73e748434423a8b5972289c81c64da883d58da |
corosynclib-devel-1.4.7-2.el6.x86_64.rpm | SHA-256: 3e5a0b70844b4c06287134b85aefc06bb0083277a6f456551f46ee44f6eb6b70 |
ctdb2.5-2.5.5-7.el6rhs.x86_64.rpm | SHA-256: 2c4b3c4ebbf40154bf70acf9759329057d54ed8aef9492992fcb2fb1e65e5fd7 |
ctdb2.5-debuginfo-2.5.5-7.el6rhs.x86_64.rpm | SHA-256: 202ce8889bfd1407ab519b880ee070f7110dba22d8b8d2a4989a9d860a8d29b4 |
fence-virt-0.2.3-19.el6.x86_64.rpm | SHA-256: 426061b66acb77dcf1a98e6aeaeb9e676837f00af6aa270bd5ca2a7fd0cc7191 |
fence-virt-debuginfo-0.2.3-19.el6.x86_64.rpm | SHA-256: 7cbcbaf2bbeec63ab77275cce31e00d480b5550fc5019e0c596ede9b9b85dc02 |
fence-virtd-0.2.3-19.el6.x86_64.rpm | SHA-256: d62aadbe9f8fb00c08aa63f8811450b2ccc08c69c935e7ed2fa5f39746ea53f6 |
fence-virtd-checkpoint-0.2.3-19.el6.x86_64.rpm | SHA-256: 6c077cb22c49a726040491d56d85c3a4b477edee78deb85b97ff6185d99ae117 |
fence-virtd-libvirt-0.2.3-19.el6.x86_64.rpm | SHA-256: ed667566dcbf6ae469cd61ce50df10562d87d0ff19ecc0442b51bd5d09180704 |
fence-virtd-multicast-0.2.3-19.el6.x86_64.rpm | SHA-256: 09945f1393878349c4e5a3a90248cfc288884775ee70fce07c2d7e8cce8bcfaf |
fence-virtd-serial-0.2.3-19.el6.x86_64.rpm | SHA-256: 6a620038d3fdcfa2c66be098c25b79c2a0ab97e2bd6dc5b0ca6283630501742a |
gfs2-utils-3.0.12.1-73.el6.x86_64.rpm | SHA-256: cc6faca8f4a1e2142776c7475e73551143421f50837e6fe9413164c4a18f0c4b |
gluster-nagios-addons-0.2.4-4.el6rhs.x86_64.rpm | SHA-256: 98c0b64624d46918405085e600afc01cce8ca8d9f3344fe8874147603fd414d6 |
gluster-nagios-addons-debuginfo-0.2.4-4.el6rhs.x86_64.rpm | SHA-256: d9f66c80505a9793aa68aa143ca81e02cbee34a9d903e05b5457318d86d4e946 |
gluster-nagios-common-0.2.0-1.el6rhs.noarch.rpm | SHA-256: 4b92192ddbbf2f30ec072f138f0e1990b5a164bd5a8983d1d0021b4a81d6a737 |
glusterfs-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: 7887dbfb07e8d4134f5ecabde1ab751dbe4533cbffecb0346b4c0d6ef7ea1a87 |
glusterfs-api-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: 8825720e71cf3450c5ee024a220d2408380afb2772eebf108cc2b3d774f9391f |
glusterfs-api-devel-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: cc6d5bce01bb039132c4ab7255d2e7ecb8063c68bc93cc8bb845ec9d6a309206 |
glusterfs-cli-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: 3e71c8ab33d2aa47feb6c77bed9fcb6cf2a1f5ab90cdb46e48372cfca8a7f207 |
glusterfs-client-xlators-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: 57013c9850e090b8b02158a3d0ad2bb68f46b1feb87e865653e545c59ec64570 |
glusterfs-debuginfo-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: 183b779919982d5875d5c7b487b307e62e81c3b2f03984ed7aa3d228a5e8850d |
glusterfs-devel-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: 2a8854a7277a182db0d95765b4820f88bdbe8c950da24e6ec484ab4a3d7a43fc |
glusterfs-fuse-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: da6a909f08fbe796dba64ec1ec0f653c1b2febf4fbf923af46a223fbca14017b |
glusterfs-ganesha-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: 34d783591bf96d139143901e7ba6bcbd1c43b6bad4bb28de3b075403043850e0 |
glusterfs-geo-replication-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: 9c43acde01cbae99b526066aeba24376905a7214bdf9c9dc5b1ad08143b0ceb6 |
glusterfs-libs-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: d9548a199476a9647e27cda14e259e7d59c4e727f62f78904bf50025cf9d253a |
glusterfs-rdma-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: b8c65b4319bd206d217650ba15fafde76ccafcff585683a94ac811f7cf59cc26 |
glusterfs-server-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: 486538783c74e7acc63ba59b076cc0820496b6a5e194cb1b3ffe3a1d05f87863 |
gstatus-0.64-3.1.el6rhs.x86_64.rpm | SHA-256: a1adbbd2a0bfd755e493247efce850290a931c7270dc746bf0c0308ec4a4125d |
gstatus-debuginfo-0.64-3.1.el6rhs.x86_64.rpm | SHA-256: 4dbbc707d63839f82a504cd4926b0944ba672cb756cf3fec6dd22eae82266f16 |
libqb-0.17.1-1.el6.x86_64.rpm | SHA-256: 592f71b0509520c37026c06cd7b5eca1bc3ecffee26000025ec48764cd7888a3 |
libqb-debuginfo-0.17.1-1.el6.x86_64.rpm | SHA-256: 59de72c7f09981b4349963ccfa564741f779c3c53a6ab3aab0b66bd9cfe59114 |
libqb-devel-0.17.1-1.el6.x86_64.rpm | SHA-256: 2c8faf9da5ec5a2ae1c65926ae803cc259e5bd45ee63524a74369f6e15a30f02 |
libtalloc-2.1.1-4.el6rhs.x86_64.rpm | SHA-256: 6014e0d1b5de679b8739ea80121996be2df4f2fa9f8129ef013bebffe6dc21e8 |
libtalloc-debuginfo-2.1.1-4.el6rhs.x86_64.rpm | SHA-256: 7c0e021dcadecb2120d2f6b36ee92c7f0a69a6a072484a078e550115313cfa43 |
libtalloc-devel-2.1.1-4.el6rhs.x86_64.rpm | SHA-256: 4c0dd52525999051b5584ad9b12826be7b7b59155cffe6cb0b1ff747b5e61fc8 |
libvirt-debuginfo-0.10.2-54.el6.x86_64.rpm | SHA-256: 35995247845ee0b0475774fc173da5e0e399c71203ffb15c3c3d822a682e99e8 |
libvirt-lock-sanlock-0.10.2-54.el6.x86_64.rpm | SHA-256: af7d745185fb7e585ee2492e5336648713ba2d30faffc05d8f162af256ebd874 |
modcluster-0.16.2-31.el6.x86_64.rpm | SHA-256: e99d458c955103d6200b3b1fc73dda5385c9403268dcbc38b67a1fa6bd238b40 |
nagios-plugins-1.4.16-12.el6rhs.x86_64.rpm | SHA-256: 92278021a8dbd818afe0ee8894dbdba0e112957f0ec91a0af402eae28527a796 |
nagios-plugins-debuginfo-1.4.16-12.el6rhs.x86_64.rpm | SHA-256: 4624666bbd0031a72aa2018d748788d628826fc95beefeb1018095281cc14cf5 |
nagios-plugins-ide_smart-1.4.16-12.el6rhs.x86_64.rpm | SHA-256: ac8a192f368757212afb6aabd9757f22ae0b257009ce332327f2627d802ddec7 |
nagios-plugins-procs-1.4.16-12.el6rhs.x86_64.rpm | SHA-256: 496a72c5077377d35f1ddbfed6afa52bb24ca7bba777c9e76bf0a716fe6da7f5 |
nrpe-2.15-4.1.el6rhs.x86_64.rpm | SHA-256: 493c8e172edbf916bd29a79940059eeb812d6e04ee55fca168a5ca1f963f6e17 |
nrpe-debuginfo-2.15-4.1.el6rhs.x86_64.rpm | SHA-256: 2093733b5c27f07ede16de71480bdcf8b47afb57597f02659da244406779999f |
openais-1.1.1-7.el6.x86_64.rpm | SHA-256: ec7244ac55b1a9397e2b65782ddd0566bedd906609932b29921425e09dabae01 |
openais-debuginfo-1.1.1-7.el6.x86_64.rpm | SHA-256: 4f3c5a0842e1b995d4046ef8c686a2f7ee736d31f7706d0ee6a2c782ebf96144 |
openaislib-1.1.1-7.el6.x86_64.rpm | SHA-256: 90468b992f6891ddf40ddd096223dda17d8e7199a976ac5cf37df2bfc1ea0d0f |
openaislib-devel-1.1.1-7.el6.x86_64.rpm | SHA-256: 122737eabde0663f680a57849a269dd9d70f4da8205ace770c1d392478a16ffa |
openstack-swift-1.13.1-4.el6ost.noarch.rpm | SHA-256: 16d4375cb9358cbd2f0e2c7216001719dd5bdde293fcb292e2b5ca6255bf84bd |
openstack-swift-account-1.13.1-4.el6ost.noarch.rpm | SHA-256: 0e838dc722e4774202010f0de038e025472123a13813fb14f9f206e6d452260f |
openstack-swift-container-1.13.1-4.el6ost.noarch.rpm | SHA-256: 705febd2ebe4551efc1cf5d1b778f17735664450a463e67e757fdde88e25a83d |
openstack-swift-doc-1.13.1-4.el6ost.noarch.rpm | SHA-256: 2888f3b745a951beab26c9e1b66d702f61aa73081f534e1ea9402a834e18c472 |
openstack-swift-object-1.13.1-4.el6ost.noarch.rpm | SHA-256: 62734a5a434dc1a84248d25094056e35bceec640b3ad0e15e1f3221974523adb |
openstack-swift-proxy-1.13.1-4.el6ost.noarch.rpm | SHA-256: 95a434ad5601f4976d4d9fecb9a0e2a2885a5d4182cbcc0f47f08d9ba0aa3299 |
pacemaker-1.1.12-8.el6.x86_64.rpm | SHA-256: bdcd7ee062db72a7dcfe39a92bbdfbd27da3b3da7134ad70cd4d9174b6695d46 |
pacemaker-cli-1.1.12-8.el6.x86_64.rpm | SHA-256: 318736bb6075942af46c4688cca79cce92a75d55905809621611b707256d7fb4 |
pacemaker-cluster-libs-1.1.12-8.el6.x86_64.rpm | SHA-256: 46016939c92c3e07820e7d055f6159ab7b01ae4878fc595c0db3215fa82d63c2 |
pacemaker-cts-1.1.12-8.el6.x86_64.rpm | SHA-256: 1a1dcbb112b181fae70380e88a81f7912a4eafc6c6a0492b96860fc8b071cb82 |
pacemaker-debuginfo-1.1.12-8.el6.x86_64.rpm | SHA-256: e21ee3715c618d864b749a024a6d78ddd2f8b8ad7b71736cd077d9b5fbe9a001 |
pacemaker-doc-1.1.12-8.el6.x86_64.rpm | SHA-256: deafac9d5f91a20c19804c6a448f0095bd68e46ce4e991401628d663f78d39b7 |
pacemaker-libs-1.1.12-8.el6.x86_64.rpm | SHA-256: 9229de3d9734c956abffc8028d1c14213b84b972ac3ebf06be3e229db35fdb29 |
pacemaker-libs-devel-1.1.12-8.el6.x86_64.rpm | SHA-256: 5865d835fca908e31cdd64794fcbd432d2bfe785a29e7c7b6c4bacad3084b0c6 |
pacemaker-remote-1.1.12-8.el6.x86_64.rpm | SHA-256: 89f967faa6e85bc0968e07a1d4785bd957de4371ebd72bd52fefa5e8cbc5f755 |
pcs-0.9.139-9.el6.x86_64.rpm | SHA-256: 91a214854160423765cb766c80319fc85efc0bb0f8907c237cd19f9022dab6bb |
pcs-debuginfo-0.9.139-9.el6.x86_64.rpm | SHA-256: bac9ae75301be3c2869bce9f703733bb41f8a568782a998f9c99fbbb0bd57ae2 |
pytalloc-2.1.1-4.el6rhs.x86_64.rpm | SHA-256: c0a04427f17b508e609fc0102f93f0ad2e9935eeab8e542d6736f070e24f767b |
pytalloc-devel-2.1.1-4.el6rhs.x86_64.rpm | SHA-256: 918fdf585404c2bfa931308d42272becbedf31cd8cff0247f6923615aa30bdd8 |
python-blivet-1.0.0.2-1.el6rhs.noarch.rpm | SHA-256: c14acd9cdf421ea0d1649a9bb989e954674e37e692566acda8333d8dda67011e |
python-clufter-0.11.2-1.el6.x86_64.rpm | SHA-256: 076d5288e3159ee3303e08593c90afa2075c37f9b4dd713438bdb036fe75bd2e |
python-cpopen-1.3-4.el6_5.x86_64.rpm | SHA-256: b2ebb6f65d31cf6295d2dafc9a059b6e1c2465f48b488236fa350b754939e117 |
python-cpopen-debuginfo-1.3-4.el6_5.x86_64.rpm | SHA-256: b01462faa7cdec0ed132963f40922b0417c96dc9e046eed57592673c44dc3a75 |
python-eventlet-0.14.0-1.el6.noarch.rpm | SHA-256: b7e8f66d563a4414a1c69a05186b4d17543b3ecda3782bafd7be99c1815dcefa |
python-eventlet-doc-0.14.0-1.el6.noarch.rpm | SHA-256: 5d297bf3925cc6368c84bd75ad46667f839144cfa3322188756584b534b223f6 |
python-gluster-3.7.1-11.el6rhs.x86_64.rpm | SHA-256: f85adbc26a50dafd8c8a63871d27e61a88d69aa6b1dddc91b3968163d7a3c0b7 |
python-greenlet-0.4.2-1.el6.x86_64.rpm | SHA-256: 718a119a6fe8b4edd9a1c27c8399424bb5510f9d6b18eccf864bda81c5443233 |
python-greenlet-debuginfo-0.4.2-1.el6.x86_64.rpm | SHA-256: 8bffbf08e17a83027a83fc47630f9f248618a86fde3922f589c9fce1334d6e39 |
python-greenlet-devel-0.4.2-1.el6.x86_64.rpm | SHA-256: 434d6f2b65e5765b41d768d6ef8667121b2064f59166aceb2747d256313e5cd0 |
python-keystoneclient-0.9.0-5.el6ost.noarch.rpm | SHA-256: c15ee68895ffc920d914ac670593d804f5126dfa82320f09a86679b4ecde2257 |
python-keystoneclient-doc-0.9.0-5.el6ost.noarch.rpm | SHA-256: 4018eb6dcda4c9b637300f5ff3abbe6818db3540b27e4a10b6496d0166fd7487 |
python-prettytable-0.7.2-1.el6.noarch.rpm | SHA-256: 6883158dd9f811aec0837766c4966d7553e4ee42e748ed6bb0b7831f612d3f7e |
python-pyudev-0.15-2.el6rhs.noarch.rpm | SHA-256: 2e94f8c7cbc9bc6bcbb0fbd8ca2b26fd2b9b7ad6a75536c85e503b1fa51a3e6d |
redhat-storage-logos-60.0.20-1.el6rhs.noarch.rpm | SHA-256: 7357a814c3a918b05290d109365558619960367ca568f5ffb9697203900b1a08 |
redhat-storage-server-3.1.0.3-1.el6rhs.noarch.rpm | SHA-256: e918998aa8605b77c3b981a339cacebbf684aaddb14c58c2cc74eea1ca4fc8ca |
resource-agents-3.9.5-24.el6.x86_64.rpm | SHA-256: c553e7a7293d8e8ebb5c9a67007f93765d8e8a828b9752861c4a3272a557c665 |
resource-agents-debuginfo-3.9.5-24.el6.x86_64.rpm | SHA-256: 7a9f8dd9157d9f9d548bf02c444ec49b6b1575c986257c1df7947264e518e4fb |
resource-agents-sap-3.9.5-24.el6.x86_64.rpm | SHA-256: a1e47e72126743fb8209a659c5746c731b36bab59f7358ba3649de8ab52b6f90 |
ricci-0.16.2-81.el6.x86_64.rpm | SHA-256: e5832b6f029db71949845fada3436cc56da74480aefcab013c4bfe56825e8c56 |
ricci-debuginfo-0.16.2-81.el6.x86_64.rpm | SHA-256: cb6bc0a8b7e3f010a3159f9d26e7632ed3efa821574cf626a8824a29826d390e |
userspace-rcu-0.7.9-2.el6rhs.x86_64.rpm | SHA-256: 951fec2dd16e919334d0d4818f2e19810396ee65bf4cf799066bb468fd673b57 |
userspace-rcu-debuginfo-0.7.9-2.el6rhs.x86_64.rpm | SHA-256: 7c1359fa325852da500a2bac45cc2404223c33ec9939b3fc674140e1dae6126f |
userspace-rcu-devel-0.7.9-2.el6rhs.x86_64.rpm | SHA-256: c2e59f42a81795955bff985e07ba35e89084e1d82c5275086b811a4455d04bf8 |
vdsm-4.16.20-1.2.el6rhs.x86_64.rpm | SHA-256: 186533a1d10c27881743a57ac8a5bcddc1f5cf35bc7374b144f4b8287959893b |
vdsm-cli-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: 8da2a6636dfb7e6f74df92dd01e47347a1f4cb50be34ce8e448dfabdad2178ed |
vdsm-debug-plugin-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: c7e1468ee99593cc2f6af1b0abf6e55e313eee4c2e70723a64843c189cecd191 |
vdsm-debuginfo-4.16.20-1.2.el6rhs.x86_64.rpm | SHA-256: f50c232b5b754aa49f3dd3d5aec71c70d552f05fd1ef864c3abe273b74777e15 |
vdsm-gluster-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: ed706bf9980f5fdf6d818eafa8549414aba49c994c27646a03ebbe7097ca02c7 |
vdsm-hook-ethtool-options-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: 64a0cabe9d1d374f735b701317d60f2028b416f5abf543c45fe0b61ae31265c4 |
vdsm-hook-faqemu-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: d0d8fb9486f2a98ebb4224d2edb466a891873e811757678e6e794d06c5fe31aa |
vdsm-hook-openstacknet-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: 7e589a24cc329289be20df52538df3b32e0bc1bd970ecacf834a565909dfa4f5 |
vdsm-hook-qemucmdline-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: 973306aa6cc3a5f5e2c22e5a7b1a769c2943386187073bd52d6c3bfd45f3cba4 |
vdsm-jsonrpc-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: 77c282812324707a9ecf625018493da8c0bfc482640abb37ad7ad24f2975d343 |
vdsm-python-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: 1dbf81a788b0b4124d7a9b68338f9c447e3d610c98f6df6f1233f35f7ad4a3f6 |
vdsm-python-zombiereaper-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: 688d743cac6d70c23e7d40fc5cf27b7c7a0ad1fd259d7e559afa24cb9e733eef |
vdsm-reg-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: 89681c06e85b875aa6fa9695395b7e94e03626d4cfb2271582580f2c1daae37a |
vdsm-tests-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: d1f229b28ce21bfc76132a7be733e2d50a5e10d4edf462b6474e7ff800982897 |
vdsm-xmlrpc-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: fa5fcfb9781a27366f6f6af036eb97e0601edf3ec15c3df5bb978b399072457b |
vdsm-yajsonrpc-4.16.20-1.2.el6rhs.noarch.rpm | SHA-256: 671453bde3d3d49abcc42befbbd209a4b907ea07ecbeb6cc3749bf8a72336bf5 |
Red Hat Gluster Storage Nagios Server 3 for RHEL 6
SRPM | Checksum
---|---
check-mk-1.2.6p1-3.el6rhs.src.rpm | SHA-256: 313f1ff362c0d13940aba0871728255c227a8589a046a0d629d1d0d75b250166 |
gluster-nagios-common-0.2.0-1.el6rhs.src.rpm | SHA-256: fa920b34e00fae56081878f3c97c6319ff4e7ed7ee6c992395c10fec1ff987c7 |
nagios-plugins-1.4.16-12.el6rhs.src.rpm | SHA-256: 3986ddca26b2b57bdec3c9c59180839ab83ef18cfd26f56af828e41ec6aa3616 |
nagios-server-addons-0.2.1-4.el6rhs.src.rpm | SHA-256: 0599294a51c2f264ea2fa5a5b808d0f5d4ebf15e80c757f38a99c281ece1453a |
nrpe-2.15-4.1.el6rhs.src.rpm | SHA-256: 58e6a7d8055170d7edcc7d4357ed5cd687bc10ac56c0067baa0362627a74fb3d |
pnp4nagios-0.6.22-2.1.el6rhs.src.rpm | SHA-256: 243e8383571ea0358cf623e5bedb3693b5f0f0c8d5cb0546830fa4b8bfbabe5e |
pynag-0.9.1-1.el6rhs.src.rpm | SHA-256: d3a19ffe0bbee9482496fe7522d8ccf3afd31d0dbaa8cd751cff66db6900a640 |
python-cpopen-1.3-4.el6_5.src.rpm | SHA-256: babd41022c1034361c86606933cf63fe1388f095a7946196ed2b139865310197 |

x86_64 | Checksum
---|---
check-mk-1.2.6p1-3.el6rhs.x86_64.rpm | SHA-256: 8f597e896b66f46cb930e1d5fd597364205a0c0f6636bfdc2667892ae76b4c4d |
check-mk-livestatus-1.2.6p1-3.el6rhs.x86_64.rpm | SHA-256: 1529d759d0b3a150fc2f627d46bad34f1cc8ef0c31a9422b180dc3f89d5ef082 |
gluster-nagios-common-0.2.0-1.el6rhs.noarch.rpm | SHA-256: 4b92192ddbbf2f30ec072f138f0e1990b5a164bd5a8983d1d0021b4a81d6a737 |
nagios-plugins-1.4.16-12.el6rhs.x86_64.rpm | SHA-256: 92278021a8dbd818afe0ee8894dbdba0e112957f0ec91a0af402eae28527a796 |
nagios-plugins-debuginfo-1.4.16-12.el6rhs.x86_64.rpm | SHA-256: 4624666bbd0031a72aa2018d748788d628826fc95beefeb1018095281cc14cf5 |
nagios-plugins-dummy-1.4.16-12.el6rhs.x86_64.rpm | SHA-256: 78e3940dc1928a5d30396095e1e079d72b3284ad0f1d2c32eb6513e1ac1d8c95 |
nagios-plugins-nrpe-2.15-4.1.el6rhs.x86_64.rpm | SHA-256: b5395d4755d8e98882bf2ed271217e7e94fde2aa525c08b6bbd28df9c508715c |
nagios-plugins-ping-1.4.16-12.el6rhs.x86_64.rpm | SHA-256: ca68717a8f0915bb7e96b4b68221f473d07dc3110f397a4f0b753ba577ed2812 |
nagios-server-addons-0.2.1-4.el6rhs.noarch.rpm | SHA-256: 922678d53c845c82621ca0902f60e50a1069a8897ed82019df56443c5576cbdf |
nrpe-debuginfo-2.15-4.1.el6rhs.x86_64.rpm | SHA-256: 2093733b5c27f07ede16de71480bdcf8b47afb57597f02659da244406779999f |
pnp4nagios-0.6.22-2.1.el6rhs.x86_64.rpm | SHA-256: cc5aad479ae7f4d9d393a51862cb8383a59eed10abf7f7ab4f33c8025eb65289 |
pynag-0.9.1-1.el6rhs.noarch.rpm | SHA-256: c11e759c7d4f7b2ae79a11945ba2719243d076e99357af6cd32cf70cfde4adc8 |
pynag-examples-0.9.1-1.el6rhs.noarch.rpm | SHA-256: c6608f0bb225729a91767067afd3028e74e32c8c9eeba717a225d06040e60af8 |
python-cpopen-1.3-4.el6_5.x86_64.rpm | SHA-256: b2ebb6f65d31cf6295d2dafc9a059b6e1c2465f48b488236fa350b754939e117 |
python-cpopen-debuginfo-1.3-4.el6_5.x86_64.rpm | SHA-256: b01462faa7cdec0ed132963f40922b0417c96dc9e046eed57592673c44dc3a75 |
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.