Red Hat Training

A Red Hat training course is available for Red Hat Gluster Storage

Chapter 4. RHEA-2015:1494-08

The bugs contained in this chapter are addressed by advisory RHEA-2015:1494-08. Further information about this advisory is available at https://rhn.redhat.com/errata/RHEA-2015:1494-08.html.

gluster-nagios-addons

BZ#1081900
Previously, there was no way to alert the user when split-brain was detected on a replicate volume. As a consequence, users were unaware of the issue and could not take timely corrective action. With this enhancement, the Nagios plugin for self-heal monitoring reports whether any of the entries are in a split-brain state. The plugin has been renamed from "Volume Self-heal" to "Volume Split-brain status".
BZ#1204314
Previously, the Memory utilization plugin did not deduct cached memory from used memory. This caused Nagios to alert on a low-memory condition when none actually existed. With this fix, the cached memory is deducted from the used memory, so the Memory utilization plugin returns the correct value for used memory and no longer raises false low-memory alerts.
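The corrected calculation can be sketched as follows. The field names mirror /proc/meminfo, and the sample figures and comparison are illustrative assumptions, not the plugin's actual implementation:

```python
# Minimal sketch of the corrected memory-utilization logic.
# Field names follow /proc/meminfo (values in MiB); the sample
# numbers below are illustrative, not taken from the plugin.

def used_memory_percent(mem_total, mem_free, cached, buffers):
    """Return used memory as a percentage, excluding cache/buffers.

    Cached and buffered pages are reclaimable, so counting them as
    "used" (the old behaviour) produced false low-memory alerts.
    """
    used = mem_total - mem_free - cached - buffers
    return 100.0 * used / mem_total

# Example: 8 GiB total, 1 GiB free, 4 GiB cached, 0.5 GiB buffers.
naive = 100.0 * (8192 - 1024) / 8192                  # old logic: 87.5%
actual = used_memory_percent(8192, 1024, 4096, 512)   # new logic: 31.25%
```

With the old logic the host above would appear nearly out of memory; deducting the reclaimable cache shows it is mostly idle.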
BZ#1109744
Previously, when multiple volumes lost quorum, a misleading notification message reported that quorum was lost for only one volume. With this fix, the notification message is corrected to inform the user that quorum is lost on the entire cluster.
BZ#1231422
Previously, due to an issue in the nrpe package, data was truncated during transfer, leaving invalid data in the RRD database. This caused failures in the pnp4nagios chart display. With this fix, the entire data set is transferred from the NRPE server and the pnp4nagios charts work properly.

nagios-server-addons

BZ#1119273
Previously, when CTDB was configured and functioning, stopping the ctdb service on a node displayed the status of the ctdb service as 'UNKNOWN' with the status information 'CTDB not Configured', instead of a proper critical error message. Due to this wrong message, users might assume that CTDB was not configured. With this fix, the correct error messages are displayed.
BZ#1166602
Previously, when glusterd was down on all the nodes in the cluster, the status information for volume status, self-heal, and geo-replication status was improperly displayed as "temporary error" instead of "no hosts found in cluster" or "hosts are not up". As a consequence, users were confused into thinking there were issues with volume status, self-heal, or geo-replication that needed to be fixed. With this fix, when glusterd is down on all the nodes of the cluster, the Volume Geo-Replication, Volume Status, and Volume Utilization statuses are displayed as "UNKNOWN" with the status information "UNKNOWN: NO hosts(with state UP) found in the cluster". The brick status is displayed as "UNKNOWN" with the status information "UNKNOWN: Status could not be determined as glusterd is not running".
BZ#1219339
Previously, the NFS service running as part of Gluster was shown as 'NFS' in Nagios. In this release, another NFS service called 'NFS Ganesha' is introduced, so displaying only 'NFS' could confuse the user. With this enhancement, the NFS service in Nagios is renamed to 'Gluster NFS'.
BZ#1106421
Previously, the Quorum status was a passive check. As a consequence, the plugin status was displayed as Pending even when there were no issues with quorum or quorum was not enabled. With this fix, a freshness check is added: if the plugin is not updated or its results are stale by an hour, the freshness check is executed to update the plugin status. If there are no volumes with quorum enabled, the plugin status is displayed as UNKNOWN.
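A freshness check of this kind is expressed in Nagios with the standard `check_freshness` and `freshness_threshold` service directives. The fragment below is an illustrative assumption (host name, service description, and check command are invented for the example), not the shipped configuration:

```
# Illustrative Nagios service definition for a passive check with
# freshness checking; names and threshold are assumptions.
define service {
    host_name               gluster-cluster
    service_description     Cluster - Quorum
    active_checks_enabled   0       ; results are submitted passively
    passive_checks_enabled  1
    check_freshness         1       ; enable freshness checking
    freshness_threshold     3600    ; results are stale after one hour
    check_command           check_quorum_status
    use                     generic-service
}
```

When no fresh passive result arrives within `freshness_threshold` seconds, Nagios runs `check_command` itself to refresh the plugin status.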
BZ#1177129
Previously, the Nagios plugin monitored whether the glusterd process was present. As a consequence, the plugin returned an OK status even if the glusterd process was dead but its PID file still existed. With this fix, the plugin monitors the glusterd service state, and the glusterd service status is now reflected correctly.
BZ#1096159
Previously, the logic for determining volume status was based on the brick status and volume type, but the volume type was not displayed in the service status output. With this fix, the volume type is shown as part of the volume status information.
BZ#1127657
Previously, the 'configure-gluster-nagios' command, which is used to configure Nagios services, prompted the user for the Nagios server address (IP address or FQDN) but did not verify its correctness. As a consequence, the user could enter an invalid address and end up configuring Nagios with wrong information. With this fix, the 'configure-gluster-nagios' command verifies the address entered by the user to make sure that Nagios is configured correctly to monitor RHGS nodes.

rhsc

BZ#1165677
Red Hat Gluster Storage Console now supports the RDMA transport type for volumes. You can now create and monitor RDMA transport volumes from the Console.
BZ#1114478
With this release of Red Hat Gluster Storage Console, system administrators can install and manage groups of packages through the groupinstall feature of yum. By using yum groups, system administrators need not manually install related packages individually.
BZ#1202731
Previously, the dependency on the rhsc-log-collector package was not specified, and hence the rhsc-log-collector package was not updated when the rhsc-setup command was run. With this fix, the rhsc spec file has been updated, and the rhsc-log-collector package is now updated when the rhsc-setup command is run.
BZ#1062612
Previously, when Red Hat Storage 2.1 Update 2 nodes were added to a 3.2 cluster, users were allowed to perform rebalance and remove-brick operations, which are not supported in a 3.2 cluster. As a consequence, further volume operations were not allowed because the volume was locked. With this fix, an error message is displayed when users execute the rebalance and remove-brick commands in a version 3.2 cluster.
BZ#1105490
Previously, cookies were not marked as secure. As a consequence, cookies without the Secure flag could be transmitted over an unencrypted channel, making them susceptible to sniffing. With this fix, all the required cookies are marked as secure.
BZ#1233621
Striped volume types are no longer supported in Red Hat Gluster Storage. Hence, the striped volume type options are no longer listed during volume creation.
BZ#1108688
Previously, an image on the Nagios home page was not transferred via SSL, and the security details displayed a "Connection Partially Encrypted" message. With this fix, the Nagios news feed that contained the non-encrypted image has been changed, and this issue no longer occurs.
BZ#1162055
Red Hat Gluster Storage Console can now manage and monitor clusters that are not in the same data center as the Console. With this enhancement, the Console can manage Red Hat Gluster Storage clusters running in remote data centers and supports the geo-replication feature.
BZ#858940
Red Hat Gluster Storage now runs with SELinux in enforcing mode, and it is recommended that users set up SELinux correctly. An enhancement has been made to alert users when SELinux is not in enforcing mode: the Console now alerts the user if SELinux is in permissive or disabled mode, and the alerts are repeated every hour.
BZ#1224281
An enhancement has been made to allow users to separate management and data traffic through the Console. This ensures that management operations are not disrupted by data traffic and vice versa. This enhancement also provides better utilization of network resources.
BZ#1194150
Previously, only TCP ports were monitored. For RDMA-only volumes, the TCP port is not applicable, so these bricks were marked offline. With this fix, both RDMA and TCP ports are monitored and the bricks reflect the correct status.
BZ#850458
Red Hat Gluster Storage Console now supports the geo-replication feature. Geo-replication provides a distributed, continuous, asynchronous, and incremental replication service from one site to another over Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet. You can now set up a geo-replication session, perform geo-replication operations, and manage source and destination volumes through the Console.
BZ#850472
Red Hat Gluster Storage Console now supports the snapshot feature. The snapshot feature enables you to create point-in-time copies of Gluster Storage volumes, which you can use to protect data. You can directly access read-only snapshot copies to recover from accidental deletion, corruption, or modification of data. Through the Red Hat Gluster Storage Console, you can view the list of snapshots and their statuses, and create, delete, activate, deactivate, and restore snapshots.
BZ#960069
Previously, xattrs and residual .glusterfs files remained on previously used bricks. As a consequence, creating a new volume with previously used bricks failed from the Red Hat Gluster Storage Console. With this fix, an option has been added in the UI to pass the "force" flag to the volume create command, which clears the xattrs and allows the bricks to be reused.
BZ#1044124
Previously, the host list was not sorted and was displayed in random order in the Hosts drop-down list. With this fix, the hosts in the drop-down list of the Add Brick dialog are sorted.
BZ#1086718
Previously, the Red Hat Access plugin answers were not written to the answer file during rhsc setup. With this fix, the redhat-access-plugin-rhsc and rhsc-setup plugins write the answers to the answer file and do not ask the Red Hat Access plugin questions again.
BZ#1165269
Previously, when a Red Hat Gluster Storage node was added to the Red Hat Gluster Storage Console using its IP address, removed from the Red Hat Gluster Storage trusted storage pool, and then added again using its FQDN, the operation failed. With this fix, the node can be added successfully using its FQDN even if it was earlier added using its IP address and later removed from the trusted storage pool.
BZ#1224279
An enhancement has been made to allow users to monitor the state of geo-replication sessions from the Console. Users are now alerted when new sessions are created or when the session status is faulty.
BZ#1201740
Previously, the Red Hat Storage Console overrode the Red Hat Enterprise Linux values for vm.dirty_ratio and vm.dirty_background_ratio, setting them to 5 and 2 respectively. This occurred when the 'rhs-virtualization' tuned profile was activated while adding Red Hat Storage nodes to the Red Hat Storage Console, and it decreased the performance of the Red Hat Storage Trusted Storage Pool. With this fix, users are given an option to choose the tuned profile during cluster creation, so they can select the profile that suits their use case.
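For reference, the values set by the 'rhs-virtualization' tuned profile are the ones described above; the Red Hat Enterprise Linux defaults shown for comparison are the values typically shipped with RHEL 6 and are noted here as an assumption:

```
# sysctl values applied by the rhs-virtualization tuned profile
# (per the description above); the RHEL defaults in the comments
# are typical RHEL 6 values, stated as an assumption.
vm.dirty_ratio = 5              # RHEL default: typically 20
vm.dirty_background_ratio = 2   # RHEL default: typically 10
```

Lower dirty ratios force more frequent writeback, which can hurt throughput on storage-heavy workloads, hence the option to choose a different profile.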
BZ#1212513
Dashboard feature has been added to the Red Hat Gluster Storage Console. The Dashboard displays an overview of all the entities in Red Hat Gluster Storage like Hosts, Volumes, Bricks, and Clusters. The Dashboard shows a consolidated view of the system and helps the administrator to know the status of the system.
BZ#1213255
An enhancement has been made to monitor the volume capacity information from a single pane.
BZ#845191
Enhancements have been made to allow users to provision the bricks with recommended configuration and volume creation from a single interface.
BZ#977355
Previously, when a server was down, the error message that was returned did not contain the server name. As a consequence, identifying the server that is down using this error message was not possible. With this fix, the server is easily identifiable from the error message.
BZ#1032020
Previously, no error message was displayed if a user tried to stop a volume while a remove-brick operation was in progress. With this fix, the error message "Error while executing action: Cannot stop Gluster Volume. Rebalance operation is running on the volume vol_name in cluster cluster_name" is displayed.
BZ#1121055
Red Hat Gluster Storage Console now supports monitoring and measuring the performance of Gluster volumes and bricks from the Console.
BZ#1107576
Previously, Console expected a host to be in operational state before allowing the addition of another host. As a consequence, multiple hosts could not be added together. With this fix, multiple hosts can be added together.
BZ#1061813
Previously, users were unable to see the details of files scanned, moved, and failed in the task pane after stopping, committing, or retaining the remove-brick operation. With this fix, this issue is resolved.
BZ#1229173
Previously, the Reinstall button was not available in the Hosts main tab; it was available only in the Hosts General tab, making it difficult for users to navigate to 'General' to reinstall hosts. With this fix, the Reinstall button is available in the Hosts main tab.

rhsc-sdk

BZ#1054827
Gluster volume usage statistics are now available through the REST API. The volume usage details are available under /api/clusters/{id}/glustervolumes/{id}/statistics.
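A client call might be sketched as follows. Only the URL path comes from the advisory text; the host name and the example identifiers are illustrative assumptions:

```python
# Sketch of building the per-volume statistics endpoint.
# Only the path template comes from the advisory; the base URL
# and identifiers below are illustrative assumptions.
from urllib.parse import urljoin

def statistics_url(base, cluster_id, volume_id):
    """Build the Gluster volume statistics endpoint URL."""
    path = "api/clusters/{}/glustervolumes/{}/statistics".format(
        cluster_id, volume_id)
    return urljoin(base, path)

url = statistics_url("https://rhsc.example.com/", "c1f9bb4e", "7a2d41c0")
# -> https://rhsc.example.com/api/clusters/c1f9bb4e/glustervolumes/7a2d41c0/statistics
```

The resulting URL can then be fetched with any HTTP client authenticated against the Console's REST API.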