
4.2. Red Hat Gluster Storage Console


BZ#1303566
When a user selects the auto-start option in the Create Geo-replication Session user interface, the use_meta_volume option is not set. This means that the geo-replication session is started without a metadata volume, which is not a recommended configuration.
Workaround: After session start, go to the geo-replication options tab for the master volume and set the use_meta_volume option to true.
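The same option can also be set from the gluster CLI; a minimal sketch, assuming a session between a master volume MASTERVOL and a slave volume SLAVEVOL on slave.example.com (all names are placeholders):

```shell
# Enable the metadata volume for an existing geo-replication session
# (volume and host names are illustrative placeholders).
gluster volume geo-replication MASTERVOL slave.example.com::SLAVEVOL \
    config use_meta_volume true
```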
BZ#1246047
If a logical network is attached to an interface with the DHCP boot protocol and the DHCP server responds slowly, the IP address is not assigned to the interface when the network configuration is saved.
Workaround: Click Refresh Capabilities on the Hosts tab and the network details are refreshed and the IP address is correctly assigned to the interface.
BZ#1164662
The Trends tab in the Red Hat Gluster Storage Console appears empty after the oVirt engine restarts. This is because the Red Hat Gluster Storage Console UI plug-in fails to load on the first restart of the engine.
Workaround: Refresh (F5) the browser page to load the Trends tab.
BZ#1167305
The Trends tab on the Red Hat Gluster Storage Console displays only the brick utilization graphs; thin-pool utilization graphs are not displayed. Currently, there is no mechanism for the UI plug-in to detect whether a volume is provisioned using the thin provisioning feature.
BZ#838329
When an incorrect create request is sent through the REST API, the error message that is displayed contains the internal package structure.
BZ#1042808
When remove-brick operation fails on a volume, the Red Hat Gluster Storage node does not allow any other operation on that volume.
Workaround: Perform a commit or stop for the failed remove-brick task before starting another task on the volume.
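The failed task can be resolved from the gluster CLI; a sketch, where the volume name and brick path are illustrative placeholders:

```shell
# Abandon the failed remove-brick task:
gluster volume remove-brick VOLNAME server1:/rhgs/brick1 stop
# or, to finalize the removal despite the failure:
gluster volume remove-brick VOLNAME server1:/rhgs/brick1 commit
```

Either command clears the pending task so that other operations on the volume can proceed.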
BZ#1200248
The Trends tab on the Red Hat Gluster Storage Console does not display all the network interfaces available on a host. This limitation is because the Red Hat Gluster Storage Console ui-plugin does not have this information.
Workaround: The graphs associated with the hosts are available in the Nagios UI on the Red Hat Gluster Storage Console. You can view them by clicking the Nagios home link.
BZ#1224724
The Volume tab loads before the dashboard plug-in. When the dashboard is set as the default tab, the volume sub-tab remains visible on top of the dashboard tab.
Workaround: Switch to a different tab; the sub-tab is then removed.
BZ#1225826
In Firefox 38.0-4.el6_6, the check boxes and labels in the Add Brick and Remove Brick dialog boxes are misaligned.
BZ#1228179
The gluster volume set help-xml command does not list the config.transport option, so the option does not appear in the UI drop-down list.
Workaround: Type the option name instead of selecting it from the drop-down list. Enter the desired value in the value field.
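The option can also be set directly from the gluster CLI; a sketch, where the volume name is a placeholder (valid values for this option are tcp, rdma, or tcp,rdma):

```shell
# Set the transport type for a volume (VOLNAME is a placeholder).
gluster volume set VOLNAME config.transport tcp
```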
BZ#1231725
Red Hat Gluster Storage Console cannot detect bricks that are created manually using the CLI and mounted to a location other than /rhgs. Users must manually type the brick directory in the Add Bricks dialog box.
Workaround: Mount bricks under the /rhgs folder; such bricks are detected automatically by Red Hat Gluster Storage Console.
BZ#1232275
Blivet provides only partial device details when there is a major disk failure. As a result, the Storage Devices tab does not show some storage devices if the partition table is corrupted.
Workaround: Clean the corrupted partition table using the dd command. All storage devices are then synced to the UI.
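A sketch of the dd invocation, demonstrated here on a scratch image file; on the real system the target is the affected device (for example /dev/sdX), and the command is destructive, so verify the target carefully:

```shell
# Scratch image standing in for a disk with a corrupt partition table.
printf 'FAKE-PARTITION-TABLE' > disk.img
truncate -s 1M disk.img
# Zero the first sector, destroying the (corrupt) partition table.
dd if=/dev/zero of=disk.img bs=512 count=1 conv=notrunc
```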
BZ#1234445
The engine preserves the task ID of a previously retained or stopped remove-brick operation. When a user queries the remove-brick status, the engine passes the bricks from both the previous and the current remove-brick operations to the status command, and the UI returns the error Could not fetch remove brick status of volume.
In Gluster, once a remove-brick operation has been stopped, its status can no longer be obtained.
BZ#1238540
When you create volume snapshots, time zone and time stamp details are appended to the snapshot name. The engine passes only the prefix for the snapshot name. If master and slave clusters of a geo-replication session are in different time zones (or sometimes even in the same time zone), the snapshot names of the master and slave are different. This causes a restore of a snapshot of the master volume to fail because the slave volume name does not match.
Workaround: Pause the geo-replication session, then identify the respective snapshots of the master and slave volumes and restore them separately from the gluster CLI.
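A sketch of the CLI sequence, where the volume, host, and snapshot names are all illustrative placeholders (a volume must be stopped before its snapshot can be restored):

```shell
# Pause replication so the volumes can be restored independently.
gluster volume geo-replication MASTERVOL slave.example.com::SLAVEVOL pause
# Restore the matching snapshot on each cluster (names are placeholders).
gluster snapshot restore master_snap   # run on the master cluster
gluster snapshot restore slave_snap    # run on the slave cluster
# Resume replication once both restores are complete.
gluster volume geo-replication MASTERVOL slave.example.com::SLAVEVOL resume
```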
BZ#1242128
Deleting a gluster volume does not remove the /etc/fstab entries for its bricks. A Red Hat Enterprise Linux 7 system may fail to boot if any entry in /etc/fstab fails to mount; consequently, if the logical volumes backing the bricks are deleted but their /etc/fstab entries are not, the system may not boot.

Workaround:

  1. Ensure that the /etc/fstab entries are removed when the logical volumes are deleted from the system.
  2. If the system fails to boot, start it in emergency mode, enter the root password, remount / as read-write, edit /etc/fstab to remove the stale entries, save the file, and reboot.
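Step 1 can be sketched as follows, shown here on a sample file; on the real system the file is /etc/fstab, and the brick path and device names are illustrative assumptions:

```shell
# Build a sample fstab with one stale brick entry (paths are placeholders).
printf '%s\n' \
  '/dev/vg_bricks/lv_brick1 /rhgs/brick1 xfs defaults 0 0' \
  'UUID=abcd-1234 / xfs defaults 0 0' > fstab.sample
# Delete the entry for the removed brick logical volume.
sed -i '\|/rhgs/brick1|d' fstab.sample
```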
BZ#1167425
Labels do not show enough information for the graphs shown on the Trends tab. When you select a host in the system tree and switch to the Trends tab, two graphs are displayed for the mount point /: one for the total space used and another for the inodes used on the disk.

Workaround:

  1. The graph with the y-axis legend %(Total: ** GiB/TiB) shows the total space used.
  2. The graph with the y-axis legend %(Total: number) shows the inode usage.
BZ#1134319
When run on Firefox versions later than 17, the Red Hat Storage Console login page displays a browser incompatibility warning.