- Issued: 2018-09-04
- Updated: 2018-09-04
RHSA-2018:2616 - Security Advisory
Synopsis
Low: RHGS WA security, bug fix, and enhancement update
Type/Severity
Security Advisory: Low
Topic
Updated Red Hat Gluster Storage Web Administration packages that fix one security issue, several bugs, and add various enhancements are now available for Red Hat Gluster Storage 3.4 on Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Low. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
Description
Red Hat Gluster Storage Web Administration includes a fully automated setup based on Ansible and provides deep metrics and insights into active Gluster storage pools by using the Grafana platform. It provides a dashboard that gives administrators an overview of Gluster health in terms of hosts, volumes, bricks, and other GlusterFS components.
Security Fix(es):
- tendrl-api: Improper cleanup of session token can allow attackers to hijack user sessions (CVE-2018-1127)
For more details about the security issue(s), including the impact, a CVSS score, and other related information, refer to the CVE page(s) listed in the References section.
This issue was discovered by Filip Balák (Red Hat).
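The flaw class here is session replay via tokens that remain valid after logout. The Python sketch below is a minimal illustration of why server-side token cleanup matters; it is not the tendrl-api implementation, and all names in it are hypothetical.

```python
# Minimal sketch of server-side session-token handling, assuming an
# in-memory store. This is NOT the tendrl-api code; all names here are
# hypothetical and for illustration only.
import secrets
import time

SESSION_TTL = 30 * 60          # assumed 30-minute token lifetime
_sessions = {}                 # token -> expiry timestamp

def create_session():
    token = secrets.token_urlsafe(32)      # cryptographically random token
    _sessions[token] = time.time() + SESSION_TTL
    return token

def is_valid(token):
    expiry = _sessions.get(token)
    if expiry is None or expiry < time.time():
        _sessions.pop(token, None)         # drop expired entries eagerly
        return False
    return True

def logout(token):
    # The crux of CVE-2018-1127-style bugs: if this deletion is skipped,
    # the old token keeps working, and a captured token can be used to
    # hijack the session even after the user has logged out.
    _sessions.pop(token, None)
```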
Additional Changes:
These updated Red Hat Gluster Storage Web Administration packages include numerous bug fixes and enhancements. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Gluster Storage 3.4 Release Notes for information on the most significant of these changes:
https://access.redhat.com/site/documentation/en-US/red_hat_gluster_storage/3.4/html/3.4_release_notes/
All users of Red Hat Gluster Storage are advised to upgrade to these updated packages, which provide numerous bug fixes and enhancements.
Solution
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
Affected Products
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64
- Red Hat Gluster Storage Web Administration (for RHEL Server) 3.1 x86_64
Fixes
- BZ - 1502012 - gluster related stats are not pushed to graphite from collectd
- BZ - 1506123 - [RFE] UI controls to use context switcher
- BZ - 1511993 - Full alert message not visible to user without hovering on the message
- BZ - 1512091 - Event messages are getting truncated
- BZ - 1512696 - Tendrl UI reporting brick is stopped when it's up and running
- BZ - 1512937 - [RFE] Duplicated hosts in Grafana (listed by FQDN and IP)
- BZ - 1513361 - Users page filters not working
- BZ - 1513993 - tendrl services reports too long error lines in system log
- BZ - 1514171 - Data provided by the API is not fully encoded in JSON format; lists are formatted as escaped strings
- BZ - 1514442 - Successive attempts to import the same cluster on the same webadmin server fail
- BZ - 1515213 - Send password in API function for new user just once
- BZ - 1515252 - API calls with invalid job id return wrong response
- BZ - 1515660 - Tasks filter not showing tasks correctly based on date
- BZ - 1516135 - When import fails, the import button should be accessible only after unmanage
- BZ - 1516417 - Expanding an existing RHGS cluster managed by RHGS WA by adding nodes and monitoring
- BZ - 1517077 - [RFE] Grafana dashboard not showing all the volumes in UP mode when brick path has "short names"
- BZ - 1517132 - Time stamp inconsistency for repeated alerts
- BZ - 1517215 - 'Disable' Volume Profiling during cluster import behavior
- BZ - 1517246 - Alerts icon (bell icon) on Web Admin home page needs to show/indicate if there are unread events/alerts
- BZ - 1517270 - missing brick alert when there are sub-volume and quorum alerts
- BZ - 1517422 - [WA]: Volume Overview shows brick count, geo-rep sessions as "Invalid Number".
- BZ - 1518276 - Incorrect format of host reported when geo replication status changed
- BZ - 1518516 - Errors in /var/log/messages for non-georep volumes
- BZ - 1518525 - Tendrl-ansible setup script fails if the server has 2 IP addresses
- BZ - 1518610 - Under Tendrl-Gluster-Volumes, deleted vols still present in the list under Volume Name.
- BZ - 1518678 - bricks are marked as down in UI
- BZ - 1518736 - decbytes and bytes on dashboards
- BZ - 1519158 - [Web-Admin] Sorting in RHGSWA is not working with firefox browser
- BZ - 1519178 - Brick Kill followed by Replace brick,shows incorrect brick status on RHGS WA
- BZ - 1519188 - Un-necessary Filter "Brick Status" in Brick Details
- BZ - 1519201 - WA doesn't reflect that all gluster nodes are down
- BZ - 1519218 - After performing volume stop, Tendrl web GUI shows mismatched status for a few bricks in the "brick status" layout
- BZ - 1519724 - [RFE] firewall configuration should be automated in tendrl-ansible
- BZ - 1519750 - [Web-Admin] Healing and rebalance cards are empty for all volumes
- BZ - 1520886 - internal server error when user would like to see details of cluster
- BZ - 1525376 - /var/log/tendrl/node-agent directory is created only after host reboot
- BZ - 1526338 - [RFE] Enhance unmanage cluster workflow to remove only specified (affected) cluster
- BZ - 1526375 - tendrl-api rpm %post, %preun, %postun scripts should correctly handle systemd service
- BZ - 1531133 - Brick Utilization: threshold breached Alert needs to reference gluster volume name
- BZ - 1531139 - [RFE] Brick Utilization: threshold breached Alert needs to be generated for brick usage above 90%
- BZ - 1536354 - [GSS] [RFE] Cluster-id should be user-friendly
- BZ - 1538248 - [RFE] Performance Improvements
- BZ - 1542914 - rebase RHGS WA 3.4.0 to upstream tendrl 1.6.3
- BZ - 1546957 - Get profiling status during the sync
- BZ - 1549146 - Some huge numbers reported by grafana are hard to read and understand
- BZ - 1555455 - Job status for import with invalid cluster id remains as new
- BZ - 1558431 - Sorting button not working
- BZ - 1559362 - The import cluster job should be marked finished in import cluster flow
- BZ - 1559364 - The flow ExpandClusterWithDetectedPeers should be targeted to provisioner node in cluster
- BZ - 1559365 - If import cluster fails due to time out, the current job is not marked properly
- BZ - 1559368 - The expand cluster flow for cluster should be user initiated and not automatic
- BZ - 1559373 - User should be able to enable/disable profiling at volume level
- BZ - 1559379 - The cluster level profiling setting for volumes of the cluster should be an async task
- BZ - 1559387 - Back to back import and unmanage cluster multiple times results in a situation where import is complete but not marked correctly in UI
- BZ - 1559390 - No filters in 'brick detail' view
- BZ - 1559396 - Host Detail view not matching design by UX
- BZ - 1559399 - Alert count is not incremented for utilization alerts
- BZ - 1559401 - Cluster detail link
- BZ - 1559402 - Data not required for start/stop profiling
- BZ - 1559405 - Alerts which is raised from node-agent is not displayed in UI
- BZ - 1559415 - Provisioner node re-election happens almost continuously
- BZ - 1559416 - node_sync disks sync failed for multi-path devices
- BZ - 1559417 - Remove the provisioning namespace safely
- BZ - 1559421 - Sometimes delete flag for the deleted volumes is changed to False
- BZ - 1559426 - Sometimes monitoring-integration is not creating panels for a particular resource in alert dashboard
- BZ - 1559432 - Before cluster import, monitoring integration consumes a lot of CPU and memory
- BZ - 1559433 - Non participating nodes should not send rebalance data for a volume to graphite
- BZ - 1559436 - Add REST end points for getting details of individual cluster
- BZ - 1559486 - Branding should not be in grafana dashboard listbox selection
- BZ - 1559507 - [RFE] Show downstream Gluster version in list of clusters
- BZ - 1559690 - If import cluster failed, the cluster global details status should be set as unhealthy
- BZ - 1559792 - Ansible group names contains dashes, which could cause problems
- BZ - 1559901 - Use "integration_id" instead of "cluster_id"
- BZ - 1560492 - Expand action not getting disabled on cluster list, when no expansion required
- BZ - 1560879 - UI should disable the button when button or link is clicked for profiling
- BZ - 1561374 - Enable/Disable Profiling button should not be visible on volume list page for read-only user
- BZ - 1561428 - User filter not working
- BZ - 1561468 - tendrl-node-agent CPU consumption
- BZ - 1563519 - When gluster-integration goes down or glusterd goes down for a few minutes, the alert_count for volumes is initialized to zero
- BZ - 1563648 - Marshal / Un-marshal objects while saving / reading to / from etcd
- BZ - 1564107 - un-manage task managed cluster check
- BZ - 1564175 - False alerts when brick utilization breached 90%
- BZ - 1564423 - Improve messages for tasks/jobs
- BZ - 1564510 - Grafana dashboards with new nodes are created before user initiates cluster expansion
- BZ - 1565479 - no time for updated-at field
- BZ - 1565898 - RHGS-WA should check for build no in addition to NVR while importing a cluster
- BZ - 1570048 - unmanaged task always fails after import failure
- BZ - 1570564 - Tendrl-ansible precheck fails with minimum memory requirement criteria on Tendrl Server
- BZ - 1570616 - Import fails after unmanage of cluster with specified Cluster Name
- BZ - 1571235 - Job thread in all tendrl components consumes a lot of CPU and memory
- BZ - 1571244 - Import cluster job fails for a while but then finishes successfully
- BZ - 1571245 - Debug messages are added to the task details
- BZ - 1571280 - Unmanage doesn't start when more clusters are available
- BZ - 1571318 - Grafana dashboards use integration id and cluster short name at the same time
- BZ - 1571325 - Cluster remains listed by its short cluster name after unmanage
- BZ - 1571755 - Expand cluster notifications use integration id instead of cluster name
- BZ - 1571809 - Error: Import existing Gluster Cluster
- BZ - 1572052 - Utilization related alerts from monitoring-integration are displayed in alert page and not in event page
- BZ - 1572090 - Import cluster fails with TypeError
- BZ - 1572118 - ERROR - node_sync SDS detection failed: need more than 0 values to unpack - ValueError
- BZ - 1572151 - A storage node which is peer-probed with an IP address always shows deleted bricks in UI
- BZ - 1572216 - tendrl-monitoring-integration.service fails to start
- BZ - 1573079 - Node alert count shows NoData in UI
- BZ - 1573110 - Un-managed cluster's alerts are displayed in UI
- BZ - 1573481 - Alert dashboards are not updated when more than one cluster is managed by tendrl
- BZ - 1573928 - It takes time to update user information
- BZ - 1573950 - Email already taken message when changing only password
- BZ - 1574938 - Volume with name 'None' listed in grafana dashboard
- BZ - 1574942 - Expand cluster screen lists all nodes in the cluster
- BZ - 1575040 - Alert dashboard is not raising alerts when a cluster is imported with a short name
- BZ - 1575835 - CVE-2018-1127 tendrl-api: Improper cleanup of session token can allow attackers to hijack user sessions
- BZ - 1575891 - Load_all function in tendrl-common sometimes gives object with wrong info
- BZ - 1576794 - Gluster native event webhook fails sometimes
- BZ - 1576829 - Grafana alert callback webhook fails sometimes
- BZ - 1576848 - [GSS][Excessive number of 'gluster volume profile' commands launched by collectd]
- BZ - 1578009 - Brick status tooltip differs from real values
- BZ - 1578329 - Brick details stops showing data
- BZ - 1578333 - RHGS-WA doesn't show the correct profiling state at cluster level if get-state doesn't provide volume level information of profiling
- BZ - 1578885 - Import cluster error: Cluster with name: %s already exists
- BZ - 1579148 - No tooltip for 'Expanding Cluster'
- BZ - 1579150 - Volume name doesn't show ellipsis for long name
- BZ - 1579152 - Upgrade the version UI npm packages
- BZ - 1579516 - Graph headings are inconsistent; in some cases graphs are labeled as trends, which is not right.
- BZ - 1579937 - Duplicate Events are Processed and displayed in UI
- BZ - 1580385 - Node is DOWN alert not cleared properly
- BZ - 1580509 - vm.modalHeader.title tooltips for popup titles
- BZ - 1581212 - Links in Hosts page lead to Grafana dashboard without specified Cluster Name
- BZ - 1581718 - Weekly growth rate and week remaining metrics are not accurate
- BZ - 1581736 - IOPS metric is not intuitive enough
- BZ - 1581789 - Connection trends panel information can be misunderstood by customers.
- BZ - 1582465 - Incorrect infotip for "Ready to Use" text in the WA Clusters interface
- BZ - 1583171 - Utilization notifications use integration id instead of cluster name
- BZ - 1584095 - Unmanage fails after failed import
- BZ - 1584660 - UI text improvement in import cluster workflow
- BZ - 1585116 - Grafana alert dashboard does not raise alerts when nodes have string "tendrl" in hostname
- BZ - 1585715 - Brick Details page is not updated
- BZ - 1586074 - Brick Details brick counter divided to separate lines
- BZ - 1588357 - Sometimes import flow and unmanage flow is failing
- BZ - 1588440 - New volume record with no volume name and -5 alerts
- BZ - 1588650 - discovered host(s) section in import cluster screen is slightly inconsistent/misleading
- BZ - 1590405 - [GSS] RHGSWA ansible playbook runs yum update
- BZ - 1592464 - WA UI - redundant UI text in the Unmanage cluster confirmation box
- BZ - 1592487 - Job sync thread fails when /queue directory becomes empty
- BZ - 1592991 - Connections Panel heading needs to say "Connections" or "Client Connections"
- BZ - 1592992 - Throughput Panel in the overview dashboard needs to specify units
- BZ - 1593640 - After import job failed cluster is marked as managed and ready to use
- BZ - 1593852 - IOPS chart on Disk Load of Brick Dashboard shows no data during brick read/write operation
- BZ - 1593912 - IOPS chart from At Glance section of Host Dashboard reports different values compared to all other IOPS charts
- BZ - 1594762 - No tooltip for 'Unknown cluster'
- BZ - 1594862 - Thresholds for utilization bars and alerts differ
- BZ - 1594899 - Most IOPS charts in At a Glance section of Brick Dashboards show no data for short or light workloads
- BZ - 1594994 - Text boxes to enter the Web admin UI credentials are much longer than necessary.
- BZ - 1595005 - Ping Latency metric requires clarification
- BZ - 1595013 - Provide the appropriate title for two IOPS panels in host dashboard
- BZ - 1595015 - Disk Load panel in host dashboard (Capacity And Disk Load section) should be called Disk Throughput
- BZ - 1595016 - Provide the correct heading for Disk IO panel in host dashboard (Capacity and Disk load section)
- BZ - 1595052 - Brick dashboard / Disk Load section - Throughput and Latency panel units are confusing
- BZ - 1595295 - Volume:None is unknown alert
- BZ - 1596655 - Unable to fix (rerun) failed cluster expand task
- BZ - 1596820 - alerts "volume <volume name> is unknown" reported during unmanage of cluster which failed to import
- BZ - 1596862 - Improve performance of tendrl components
- BZ - 1597235 - Too much space next to events messages
- BZ - 1599634 - Expand cluster imports only one node
- BZ - 1599985 - Volume details vanish after some time in tendrl-ui
- BZ - 1599987 - Growing memory utilization of tendrl-gluster-integration on one node in cluster
- BZ - 1600092 - Importing bigger cluster failing: Timing out import job, Cluster data still not fully updated
- BZ - 1600113 - Invalid volume record when expand cluster is available
- BZ - 1603175 - GET /clusters api call returns "Invalid JSON received." for cluster with geo-replication
- BZ - 1610266 - Inconsistent password length requirements
- BZ - 1611601 - Alert Service: glustershd is disconnected in cluster is not cleared
- BZ - 1616208 - glustershd alerts should mention affected node
- BZ - 1616215 - All alerts Service: glustershd is disconnected in cluster are cleared when service starts on one node
CVEs
- CVE-2018-1127
Red Hat Gluster Storage Server for On-premise 3 for RHEL 7
Package | SHA-256 checksum |
---|---|
SRPM | |
python-flask-0.10.1-5.el7rhgs.src.rpm | SHA-256: 54fb2732036bcca01b6658e7d37a2405baf563c2ca52c4604983fb5e0d648c03 |
python-itsdangerous-0.23-2.el7.src.rpm | SHA-256: 01f097042cfb76e310c61c31a2b72e5cc619f491fc5603fe232cf016c59c43aa |
tendrl-commons-1.6.3-12.el7rhgs.src.rpm | SHA-256: 395affbe773e1ceb2f360ba87107605ecebd8d8bd72cd3640dd3816e27f66e5d |
tendrl-gluster-integration-1.6.3-10.el7rhgs.src.rpm | SHA-256: 5e5c23c59445de1ff4dcd4e00406634cac1c25d6e7e93e2ec65cf782e8965856 |
tendrl-node-agent-1.6.3-10.el7rhgs.src.rpm | SHA-256: b328991f32ab3251fb4ed459a15e5b490fcc327a25419f6d67b0097d26ac016d |
x86_64 | |
python-flask-0.10.1-5.el7rhgs.noarch.rpm | SHA-256: a2e4c8dc2f952ea0af13700ed521d94b836a62ee28a8e1c2c5fd377ec55ed96e |
python-flask-doc-0.10.1-5.el7rhgs.noarch.rpm | SHA-256: abe06c1d62d4816ab964591e9fa11d3cdbcf555aedb9acc5f5c818d0a0898ca7 |
python-itsdangerous-0.23-2.el7.noarch.rpm | SHA-256: e4a0aeac61221137ec51238ca7b8b9602d3af14f3f5d79fc9d57c85bf32ef780 |
tendrl-commons-1.6.3-12.el7rhgs.noarch.rpm | SHA-256: 58980bb5fb1e971b6f75a3df89ffd0b2a0077783a7ac1be62b6f2251e925a389 |
tendrl-gluster-integration-1.6.3-10.el7rhgs.noarch.rpm | SHA-256: 32144b3f86cf92fab66effa6ea17953da66e3078dfab0d66d720a7a19490780d |
tendrl-node-agent-1.6.3-10.el7rhgs.noarch.rpm | SHA-256: 51dfb70511d645f65ca19183d331c8f008f7ebc65adc558988a0c10d524dc6ab |
Red Hat Gluster Storage Web Administration (for RHEL Server) 3.1
Package | SHA-256 checksum |
---|---|
SRPM | |
python-flask-0.10.1-5.el7rhgs.src.rpm | SHA-256: 54fb2732036bcca01b6658e7d37a2405baf563c2ca52c4604983fb5e0d648c03 |
python-itsdangerous-0.23-2.el7.src.rpm | SHA-256: 01f097042cfb76e310c61c31a2b72e5cc619f491fc5603fe232cf016c59c43aa |
tendrl-ansible-1.6.3-7.el7rhgs.src.rpm | SHA-256: 8434df8f184b851d4365f0c738de0f4468b15b12fb2260a7d8fcc8053c86913c |
tendrl-api-1.6.3-5.el7rhgs.src.rpm | SHA-256: 9184cd69ee7e1ec14267cb23f313f78f337e9d4def7b563bac33ce79a0658c3e |
tendrl-commons-1.6.3-12.el7rhgs.src.rpm | SHA-256: 395affbe773e1ceb2f360ba87107605ecebd8d8bd72cd3640dd3816e27f66e5d |
tendrl-monitoring-integration-1.6.3-11.el7rhgs.src.rpm | SHA-256: 6a40ca77e356d64bed4ab40e32cfd320433d3b54b825246010d5ceaf22df210c |
tendrl-node-agent-1.6.3-10.el7rhgs.src.rpm | SHA-256: b328991f32ab3251fb4ed459a15e5b490fcc327a25419f6d67b0097d26ac016d |
tendrl-notifier-1.6.3-4.el7rhgs.src.rpm | SHA-256: a644cdd547016e562e8dc111fb3644b1dc8741fa8e5559a1ec8624bd626bc9c0 |
tendrl-ui-1.6.3-11.el7rhgs.src.rpm | SHA-256: 8aff66860c0ab1bda4f84b67cdea5480b8ff22f75f9a3a5bc44c56fa72b6bbdb |
x86_64 | |
python-flask-0.10.1-5.el7rhgs.noarch.rpm | SHA-256: a2e4c8dc2f952ea0af13700ed521d94b836a62ee28a8e1c2c5fd377ec55ed96e |
python-flask-doc-0.10.1-5.el7rhgs.noarch.rpm | SHA-256: abe06c1d62d4816ab964591e9fa11d3cdbcf555aedb9acc5f5c818d0a0898ca7 |
python-itsdangerous-0.23-2.el7.noarch.rpm | SHA-256: e4a0aeac61221137ec51238ca7b8b9602d3af14f3f5d79fc9d57c85bf32ef780 |
tendrl-ansible-1.6.3-7.el7rhgs.noarch.rpm | SHA-256: e1b0f9598b3b7bf198ce3b4e6bb5995a702e269c84df3f4a31f30706bd31011d |
tendrl-api-1.6.3-5.el7rhgs.noarch.rpm | SHA-256: 1dfbfaca3970a73de41735172e606718624994d99b759e41cb4299199d0dea5c |
tendrl-api-httpd-1.6.3-5.el7rhgs.noarch.rpm | SHA-256: 613b328836935e927713f4cc48754393af4f12d1d6e29d7eeab6d2278e77fa04 |
tendrl-commons-1.6.3-12.el7rhgs.noarch.rpm | SHA-256: 58980bb5fb1e971b6f75a3df89ffd0b2a0077783a7ac1be62b6f2251e925a389 |
tendrl-grafana-plugins-1.6.3-11.el7rhgs.noarch.rpm | SHA-256: 4d882492cf96d1252a3bb8e47d55971dbc1145c9ff74a448cb0fcd1ceae4e21d |
tendrl-monitoring-integration-1.6.3-11.el7rhgs.noarch.rpm | SHA-256: d0373feda6ee05d220948b2ad9425a703398306a9180a7fdacdc5322b48126e2 |
tendrl-node-agent-1.6.3-10.el7rhgs.noarch.rpm | SHA-256: 51dfb70511d645f65ca19183d331c8f008f7ebc65adc558988a0c10d524dc6ab |
tendrl-notifier-1.6.3-4.el7rhgs.noarch.rpm | SHA-256: 28984050183cb157d4d6df36ab9a950fac8c073794936111a45ded81c1a58ffe |
tendrl-ui-1.6.3-11.el7rhgs.noarch.rpm | SHA-256: 1fe479e04ef00a7afef161ff0128c94ab9d9e9be42f12e5b01f65171bea3b02b |
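The SHA-256 values above can be used to verify downloaded packages before installation. Below is a minimal Python sketch, assuming the RPM has already been downloaded to the current directory; the file name and expected digest are taken from the tendrl-ui row in the table above.

```python
# Verify a downloaded RPM against the SHA-256 digest published in this
# advisory before installing it.
import hashlib

def sha256sum(path, chunk_size=1 << 20):
    # Hash the file in 1 MiB chunks so large packages don't need to fit
    # in memory at once.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Expected digest copied from the tendrl-ui row in the table above.
expected = "1fe479e04ef00a7afef161ff0128c94ab9d9e9be42f12e5b01f65171bea3b02b"
actual = sha256sum("tendrl-ui-1.6.3-11.el7rhgs.noarch.rpm")
assert actual == expected, "checksum mismatch: do not install this package"
```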
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.