Chapter 4. RHEA-2014:0208
The bugs contained in this chapter are addressed by advisory RHEA-2014:0208. Further information about this advisory is available at http://rhn.redhat.com/errata/RHEA-2014-0208.html.
build
- BZ#994889
- Previously, when the Red Hat Storage server was installed using the ISO and using Red Hat Satellite, there was a difference in the installed package listing. A few optional packages were installed, making the Red Hat Storage server installation heavy. With this update, the ISO installation matches the packages installed by Red Hat Satellite. Additionally, installation of the system-config-firewall-base and system-config-firewall-tui packages is expected when Red Hat Storage Server is installed using Red Hat Satellite.
- BZ#829734
- Previously, configuration and data files were not removed as expected when glusterFS packages were uninstalled from Red Hat Storage Server, and a manual cleanup was required to remove these files. With this update, uninstalling the glusterFS packages removes unmodified configuration files and preserves modified configuration files.
gluster-smb
- BZ#1025361
- Previously, if the smb.conf file had been modified when the Samba package was updated to a new version, two new backup files, smb.conf.rpmnew and smb.conf.rpmsave, were created. With this update, only a single smb.conf backup file is created after the Samba package is updated.
- BZ#1020850
- Previously, all Samba clients with access to a specific Red Hat Storage volume logged messages to a single log file, which made debugging issues difficult. With this update, a log file is created by default for every Samba client, and the client's internet name is appended to the log file name for unique identification.
- BZ#1025489
- Previously, after a fresh install of Red Hat Storage, a backup file named smb.conf.rpmsave was created. With this update, the smb.conf.rpmsave backup file is not created.
- BZ#1011313
- Previously, any application that used gfapi and performed excessive I/O operations encountered an out-of-memory condition due to a memory leak. With this update, the memory leak is fixed.
- BZ#1040055
- Previously, fresh installations of Red Hat Storage 2.1 Update 1 did not have the max protocol = SMB2 configuration line in the rhs-samba.conf and smb.conf configuration files. This forced all SMB clients to use the SMB1 protocol version. With this update, the Samba server includes SMB version 2 in the supported dialect list.
- BZ#1028978
- Previously, after a Red Hat Storage update, multiple entries were seen in the smb.conf file. With this fix, duplicate entries are no longer recorded in the smb.conf file when Red Hat Storage is updated.
- BZ#1051226
- Previously, when a file on a Samba share was edited, the time of last access (atime) was wrongly set to a future date. With this update, the last access time of the file is displayed accurately.
- BZ#1004794
- Previously, when an application linked with glusterfs-api disconnected from a volume, the file descriptor corresponding to the log file was not closed. As a result, many stale file descriptors accumulated. With this update, these file descriptors are closed when the application disconnects from the volume.
gluster-swift
- BZ#1046233
- Previously, the __init__.py file was not installed along with the glusterfs-api package, leading to a missing Python module. This resulted in a failure to start the Object Store services. With this update, this issue is fixed.
- BZ#1055575
- With this update, the python-keystoneclient package is upgraded from python-keystoneclient-0.2.3-5.el6ost.noarch to python-keystoneclient-0.4.1-3.el6ost.noarch, and the swift client package is upgraded from python-swiftclient-1.4.0-2.el6ost.noarch to python-swiftclient-1.8.0-1.el6ost.noarch.
- BZ#1001033
- Enhancements have been made to the gluster-swift-gen-builders utility of Object Store by providing a man page and options to display output in verbose and non-verbose mode.
- BZ#987841
- Previously, GET requests on subdirectories would result in a traceback in Object Store. With this update, GET requests work as expected.
- BZ#901713
- Previously, a traceback message was observed when concurrent DELETE REST requests were issued on the same file. With this update, concurrent DELETE REST requests work as expected.
glusterfs
- BZ#1034479
- Previously, glusterd was unresponsive if any of its peers did not respond to requests, and it took 10 minutes for glusterd to detect the unavailability of the peer. With this update, a new mechanism detects whether a peer is unreachable or unavailable within ping-timeout seconds (default is 10s). Now, glusterd is not unresponsive for a noticeable period of time.
- BZ#1045313
- Previously, using the deprecated FUSE mount option fetch-attempts led to a mount operation failure from third-party applications. With this update, the fetch-attempts option is recognized as a dummy option for backward compatibility and the mount operation succeeds.
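  For example, a mount command that still passes the deprecated option should now succeed; the server, volume, and mount-point names below are placeholders, not values from this advisory:
      # mount -t glusterfs -o fetch-attempts=3 server1:/VOLNAME /mnt/glusterfs
  The option is accepted and ignored.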
- BZ#1020331
- Previously, the remove-brick status displayed by the gluster volume status command was inconsistent on different peers, whereas the remove-brick status command displayed a consistent output. With this update, the status displayed by the volume status command and the remove-brick status command is consistent across the cluster.
- BZ#1046571
- The gluster-api package has been modified to provide the python-gluster package and a gluster namespace. This modification brings up the gluster namespace in Python, which allows the gluster-swift plugin to use it as a Python module.
- BZ#1057291
- Previously, data loss could occur when one of the bricks in a replica pair went offline and a new file was created before the other brick came back online. If the first brick became available again before a self-heal ran on that directory, and the second brick then went offline again while new files were created on the first brick, a crash at a certain point could leave the directory in a stale state even though it contained new data. When both bricks in the replica pair were back online, the newly created data on the first brick was deleted, leading to data loss. With this update, this data loss is not observed.
- BZ#972021
- Previously, read operations succeeded on a split-brained file when multiple clients tried to read the file simultaneously due to a race condition. With this update, file read operations report an I/O error.
- BZ#1037851
- Previously, glusterd would become unresponsive when it was disconnected from one of its peers while a glusterFS CLI command was in execution. With this update, glusterd does not become unresponsive in such a scenario.
- BZ#1024316
- Previously, when remove-brick or rebalance operations were performed on a volume and the volume was subsequently stopped and deleted, glusterd would crash upon executing any gluster CLI commands. With this update, glusterd does not crash when the same sequence of commands is executed on a volume.
- BZ#1010204
- Previously, the XML output of the rebalance status command did not contain the rebalance skipped count statistic. With this update, a new tag, skipped, is introduced in the XML output to indicate the number of skipped files.
- BZ#922792
- Previously, due to the simultaneous operation of mkdir() and rmdir() on different mount points, or executing mkdir -p on the same path from different mount points, a race condition could lead to a discrepancy in the layout structure of a directory. With this update, proper namespace locking is implemented in a distribute volume.
- BZ#1032558
- Previously, when one of the bricks in a replica pair was offline, some files were not migrated from the decommissioned bricks, resulting in missing files. With this update, data is completely migrated even if one of the bricks in the replica pair is offline.
- BZ#1028325
- Previously, when a node went down, the glusterFS CLI would fail to retrieve the rebalance status of all nodes in that cluster. With this update, the glusterd service collects information from nodes that are online and ignores offline nodes. As a result, the glusterFS CLI returns an XML output even if one or more nodes in a cluster are offline.
- BZ#1034547
- Previously, when a mount process waited for more than 30 minutes (the default timeout) for a response from a brick process, the mount process would decline the request from the application. This event would generate log messages with information about the request that timed out. With this update, the information in the log messages is enhanced to include the hostname of the brick process to help with meaningful analysis of the log messages.
- BZ#1024228
- Previously, the XML output of the volume status command did not contain host UUIDs. Host UUIDs had to be found manually by looking at the output of the gluster peer status command and matching it with the gluster volume status output. With this update, the respective host and brick UUIDs are added to the brick, NFS, and SHD entries of the status XML output.
- BZ#1035519
- Previously, if a Red Hat Storage volume was exported through Samba and the smb.conf file had the option ea support set to yes, accessing that volume from a Microsoft Windows machine would fail with a "share is not accessible" error, and could result in an smbd process crash. With this update, any Red Hat Storage volume exported through Samba works as expected when the ea support option is set to yes.
- BZ#928784
- Previously, when the system underwent a hard reset or a system crash, there was a possibility of data corruption. With this update, the data operations are preserved and there is no possibility of data corruption.
- BZ#982104
- Previously, the add-brick command would reset the rebalance status. As a result, the rebalance status command displayed a wrong status. With this update, the rebalance status command works as expected.
- BZ#1023921
- Previously, the rebalance process would start automatically when glusterd service was restarted. As a result, the rebalance status command would display an incorrect output. With this update, the rebalance process is started only if required and the rebalance status command works as expected.
- BZ#1037274
- Previously, on a FUSE mount point, the size of the volume was displayed incorrectly if a brick went offline and subsequently came back online. With this update, the network reconnection issue is fixed.
- BZ#1032034
- Previously, when a cached subvolume was offline, if the rm -rf command was executed and files were subsequently created with the same filenames as before, duplicate files were seen when the cached subvolume was brought back online. With this update, creating files with the same file names is not allowed.
- BZ#955614
- Previously, the aggregate status had to be determined by looking at the individual node status, because the XML output of the remove-brick and rebalance status commands did not contain the status of the tasks in an aggregate section. With this update, an aggregate section is added with the status of the task.
- BZ#956655
- Previously, when a brick was removed, a new graph was constructed and all subsequent operations on the mount point were reflected on the new graph. However, the old graph was not cleaned up because its transports were still alive. If the peer disconnected on a transport that was alive, a reconnection attempt was made periodically, which resulted in log messages. With this update, a notification is sent that the old graph is no longer in use, and on receiving the notification the client closes the sockets.
- BZ#1022328
- Previously, the auxiliary group limit was 128, which limited group-permission-based file access operations. If a user belonged to more than 128 groups, that information was lost and access that depended on group permissions was prevented. With this update, the limit is raised so that file system object access failures due to group permissions do not occur for users that are part of up to 65536 auxiliary groups.
- BZ#967071
- Enhancements have been made to provide read-only access to unprivileged users to query the pathinfo information with the glusterfs.pathinfo key.
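  As an illustration of this kind of query (not taken from the advisory; the mount path and file name are hypothetical), the key can be read as a virtual extended attribute on a FUSE mount:
      $ getfattr -n glusterfs.pathinfo -e text /mnt/glusterfs/file.txt
  The returned value lists the brick paths that back the file, and no elevated privileges should be required when using the glusterfs.pathinfo key.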
- BZ#1021807
- Virtual machine (VM) images are likely to be modified constantly. The VMs listed in the output of the volume heal command do not necessarily imply that the self-heal of the VM is incomplete; it could mean that modifications to the VM are happening constantly.
- BZ#1028995
- Previously, when a brick process was terminated while a remove-brick command was in progress, the status of the remove-brick operation was displayed as stopped. With this update, the status is displayed appropriately.
- BZ#1021351
- Bricks experience downtime when the load on the self-heal process is high and the gluster volume heal command is executed periodically; during this time, data on the brick may be unavailable. A split-brain condition can occur when these two scenarios happen concurrently on a brick that runs out of memory while the self-heal process is running on the same file or directory.
- BZ#1040211
- Previously, if the Red Hat Storage volume was restarted, the ownership of the mount point reverted to root/root. With this update, a volume restart action does not change the ownership of the mount point.
- BZ#1024725
- Previously, the status from a previous remove-brick or rebalance operation was not reset before starting a new remove-brick or rebalance operation. As a result, on the nodes that did not participate in an ongoing remove-brick operation, the remove-brick status command displayed the output of a previous rebalance operation. With this update, the status of the remove-brick or rebalance operation is set to NOT-STARTED on all the nodes in the cluster before the operation is started again.
- BZ#1027699
- Previously, the gluster volume status command would fail on a node when glusterd was restarted while a remove-brick operation was in progress. With this update, the command works as expected.
- BZ#1019846
- Previously, the rebalance status command would also display peers that did not have any associated bricks. With this update, the rebalance status command works as expected.
- BZ#1042830
- Previously, the glusterd service would crash when a rebalance operation was started on a volume whose name was 33 characters long. With this update, the glusterd service does not crash regardless of the length of the volume name.
- BZ#1028282
- Previously, by design, creation of bricks on a root partition was successful in script mode during volume creation. With this update, this operation is not allowed even in script mode and volume creation fails with an error message when bricks are created in the root partition.
- BZ#906747
- Previously, the volume heal volname info command failed when the number of entries to be self-healed was high. This led to the self-heal daemon or glusterd becoming unresponsive. With this update, the volume heal volname info command works as expected.
- BZ#1044923
- Previously, when a client disconnected and reconnected in quick succession there was a possibility of stale locks on the brick which could lead to hangs or failures during self-heal. With this update, this issue is now fixed.
- BZ#1019908
- Previously, the replace-brick command would become unresponsive and enter a deadlock after sending the commit acknowledgement to the other nodes in the cluster. This led to the CLI failing with an ECONNREFUSED (146) error from glusterd. With this update, the replace-brick command does not become unresponsive and works as expected.
- BZ#979376
- Previously, when one of the hosts in a cluster was restarted, the remove-brick status command displayed two entries for the same host. With this update, the command works as expected.
- BZ#1010975
- Previously, the XML output of the remove-brick and rebalance status commands did not display the host UUIDs of bricks in the node section. Host UUIDs had to be found manually by looking at the output of the gluster peer status command and matching it with the volume status output. With this update, the XML output of the rebalance and remove-brick status commands displays the host UUID of each node.
- BZ#1017466
- Different computer clocks tick at slightly different rates, causing the time to drift. Most systems use NTP to periodically adjust the system clock to keep it in sync with the actual time. Previously, these adjustments could cause the clock to suddenly jump forward (artificially inflating the timing numbers) or backward (causing the timing calculations to go negative or become hugely positive). In such cases, the timer thread could go into an infinite loop. With this update, interval timing is calculated from a clock that behaves as a simple counter incrementing at a stable rate, which avoids the jumps caused by using the wall time. The internal timer thread is made monotonic, so it can never tick backwards.
- BZ#1045991
- Previously, if a brick was used by a volume, it was not possible to add that brick to any other volume even after the volume was deleted. As a result, the brick partition was rendered unusable. With this update, a force option is added to the volume create and add-brick commands.
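  For example, assuming a volume named VOLNAME and previously used brick paths on server1 (all placeholder names), the force keyword is appended to the usual commands:
      # gluster volume create VOLNAME server1:/bricks/brick1 force
      # gluster volume add-brick VOLNAME server1:/bricks/brick2 force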
- BZ#1010966
- Enhancements were made to the rebalance and remove-brick status CLI XML outputs to include the runtime XML tag.
- BZ#861097
- Previously, on enabling or disabling a translator, a new graph was created and all subsequent operations were reflected on the new graph. If sufficient time was not available for the completion of all operations in progress, Transport End point not connected errors occurred and a stale graph was displayed. With this update, these graphs work as expected.
- BZ#923135
- Previously, using the start option with the remove-brick command on a brick from a replica pair failed. The start option triggers a rebalance operation, which involves data migration and is redundant within a replica pair. With this update, reducing the replica count of the volume using the start option with the remove-brick command is not allowed; the preferred method for such an operation is to use the force option.
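  A sketch of the preferred form, assuming a 2-way replicated volume named VOLNAME whose replica count is being reduced to 1 by removing a placeholder brick on server2; the replica count and brick shown here are illustrative only:
      # gluster volume remove-brick VOLNAME replica 1 server2:/bricks/brick1 force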
- BZ#1020995
- Previously, errors were logged in the self-heal daemon log periodically until some data was created on every brick during the rolling upgrade process. This was due to the absence of the indices directory on some of the bricks. With this update, the indices directory is created on every brick as soon as it is available online. As a result, the self-heal daemon does not display the afr crawl failed for child 0 with ret -1 error while performing a rolling upgrade.
- BZ#970813
- Previously, when a new file was created, the cluster.min-free-disk volume option setting was not enforced. With this update, sub-volumes adhere to the cluster.min-free-disk setting.
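  As a usage reminder rather than part of the fix, the option is set per volume; the volume name and the 10% threshold below are arbitrary examples:
      # gluster volume set VOLNAME cluster.min-free-disk 10%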
- BZ#994964
- Previously, the translators did not function normally without state built in the inodes. For example, io-cache did not cache the file contents if the state was not present in the inode. With this update, the code in the FUSE bridge ensures that at least one LOOKUP operation is performed before proceeding with any I/O operations, ensuring that the translators build the necessary state in the inodes. As a result, the log messages are not seen and the translators function as intended.
- BZ#1015395
- Previously, the XML output of the volume status command displayed only the type and task ID for replace-brick and remove-brick tasks, so the bricks involved in specific tasks could not be easily identified. With this update, a new tag, param, is introduced as a child of task in the XML output for volume status. The param tag contains the child elements srcBrick and dstBrick, or brick, to identify the bricks involved in a task.
- BZ#1005663
- Previously, upon a directory lookup, error logs were generated if the trusted.glusterfs.dht extended attribute was missing. With this update, error logs are not generated for directories without layouts.
- BZ#955611
- Previously, the status of the remove-brick and rebalance operations was not present as a string in the XML output; as a result, the status could not be easily identified in the XML output. With this update, a new element, statusStr, is added to the XML output containing a string that indicates the status of the task.
- BZ#1006354
- Previously, files were not checked for a split-brain state before proceeding with certain file operations, and no error message was displayed for these file operations. With this update, if a file is in a split-brain state, the file operation returns an I/O error (EIO) to the application.
- BZ#1019683
- Previously, if the glusterd daemon was restarted after adding a brick and subsequently removing a brick, the gluster volume info command reported a wrong brick count in the Number of Bricks field. With this update, the gluster volume info command displays the correct brick count even when glusterd is restarted.
- BZ#1002403
- Previously, the XML output for the volume status all command produced multiple XML documents when there were both online and offline volumes: one XML document for all the volumes that were online and a separate XML document for every volume that was offline. With this update, the code ignores the offline volumes when displaying the XML output.
- BZ#923809
- Previously, applications would become unresponsive on the mount point if bricks were not fully initialized. With this update, the bricks respond with appropriate error information.
- BZ#868715
- Previously, glusterFS would not check the /proc/sys/net/ipv4/ip_local_reserved_ports file before connecting to a port. With this update, the glusterd service does not connect to port numbers mentioned in the /proc/sys/net/ipv4/ip_local_reserved_ports file.
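  For reference (the port range below is an arbitrary example, not a recommendation from this advisory), ports can be reserved through the standard kernel tunable, and glusterd should now avoid them:
      # sysctl -w net.ipv4.ip_local_reserved_ports=49152-49156
      # cat /proc/sys/net/ipv4/ip_local_reserved_ports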
glusterfs-devel
- BZ#1021857
- Enhancements are made to the gluster-api package to support object-based APIs that operate like the POSIX *at variants (for example, man 2 openat).
- BZ#1030021
- With this enhancement, the glfs_readdir and glfs_readdirplus calls, based on the POSIX readdir call, are added to libgfapi. Earlier, libgfapi had only the glfs_readdir_r call, based on the POSIX readdir_r() call.
glusterfs-fuse
- BZ#1032359
- Previously, on a FUSE mount, group ID (GID) caching was not disabled and operations failed due to a GID mismatch. With this update, cached GIDs are validated and file operations do not fail due to wrong GIDs. GID caching is disabled with the gid-timeout=0 option.
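  A minimal sketch of disabling the cache at mount time, with placeholder server, volume, and mount-point names:
      # mount -t glusterfs -o gid-timeout=0 server1:/VOLNAME /mnt/glusterfs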
- BZ#1023950
- Previously, the backupvolfile-server mount option was removed. With this fix, to provide backward compatibility, it is now supported.
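  An illustrative mount command using the restored option (host names, volume name, and mount point are hypothetical); server2 is contacted for the volume file if server1 is unreachable:
      # mount -t glusterfs -o backupvolfile-server=server2 server1:/VOLNAME /mnt/glusterfs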
- BZ#1038908
- Previously, the gluster volume heal volname full command would perform a recursive directory crawl even when the destination brick was unavailable. This caused the crawl operation to become ineffective. With this fix, if the destination brick is unavailable, the crawl operation fails to start and an appropriate message is displayed.
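  The command in question, shown with a placeholder volume name:
      # gluster volume heal VOLNAME full
  If the brick to be healed is down, the command is now expected to fail up front with a message rather than crawling the directory tree to no effect.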
glusterfs-geo-replication
- BZ#1046604
- Previously, setting the remote xtime would fail due to a Python backtrace. This caused the geo-replication worker process to restart with a faulty status. With this fix, a Python exception is not raised when setting the remote xtime fails and the geo-replication worker process works as expected.
- BZ#1031687
- Previously, when the first xsync crawl was in progress, disconnection from the slave volume caused geo-replication to re-crawl the entire file system and generate XSYNC-CHANGELOGS. With this update, xsync skips the directories which are already synced to the slave volume.
- BZ#1002999
- Previously, metadata was not synced to the slave volume, and this led to a geo-replication failure when the owner of a file was changed on the master volume and the file was then removed by the new owner. With this update, the metadata changes related to chmod() and chown() are processed and synced. As a result, the geo-replication process successfully removes files on the slave volume on behalf of a new owner.
- BZ#990331
- Previously, SSH failed on a hostname which had its Fully Qualified Domain Name (FQDN) longer than 45 characters. With this fix, geo-replication does not fail due to a lengthy FQDN.
glusterfs-server
- BZ#979861
- Previously, glusterd would listen on port 24007 for CLI requests, and there was a possibility of glusterd rejecting CLI requests from unprivileged ports (>1024), leading to CLI command execution failures. With this fix, glusterd listens for CLI requests through a socket file (/var/run/glusterd.sock), preventing CLI command execution failures.
- BZ#929036
- Previously, setting or resetting volume options such as nfs.readdir-size, nfs.nlm, nfs.acl, and so on would restart the NFS server. With this fix, setting or resetting volume options does not lead to a restart.
- BZ#1019064
- Previously the NFS server would crash under some circumstances. With this update, this issue is fixed.
- BZ#1020816
- Previously, the Soft-limit exceeded alert log messages were logged only in the brick log files, making it difficult to check whether the quota soft-limit had been exceeded. With this fix, executing the gluster volume quota VOLNAME list command is sufficient to check whether the soft-limit and hard-limit have been exceeded.
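  The check, with VOLNAME standing in for an actual volume name; the list output includes the configured limits and current usage, from which the exceeded state can be read:
      # gluster volume quota VOLNAME list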
- BZ#977492
- Previously, when there were multi-gigabyte NFS writes (128 GB/server) to a volume, throughput slowed and eventually stopped. With this fix, the default value of the volume parameter is changed to nfs.outstanding-rpc-limit=16. As a result, the performance of Red Hat Storage NFS for multi-gigabyte file writes has improved.
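  The new value is the default, so no action is required; the command below is shown only as an illustration of how the parameter would be set explicitly on a hypothetical volume:
      # gluster volume set VOLNAME nfs.outstanding-rpc-limit 16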
- BZ#1016993
- Previously, there was no command to retrieve the status and the various parameters of asynchronous tasks for one or more volumes. The volume status all command was used, and it additionally displayed the status, PIDs, and port numbers of the bricks, the self-heal daemon, and the NFS server. With this update, the tasks option is introduced for the volume status command to retrieve the status of asynchronous tasks for one or more volumes.
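  For example (the volume name is a placeholder), either of the following limits the output to task information such as rebalance or remove-brick status:
      # gluster volume status VOLNAME tasks
      # gluster volume status all tasks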
- BZ#999569
- Previously, when an add-brick operation was performed followed by a rebalance operation, the Cinder volumes created on the Red Hat Storage volumes were rendered unusable. This issue was also observed with VMs managed with the Red Hat Enterprise Virtualization Manager. With this fix, the Cinder volumes work as expected in such a scenario.
- BZ#904300
- An enhancement is made to the nfs.export-dir option to provide client authentication during sub-directory mounts. The nfs.export-dir and nfs.export-dirs options provide granular control to restrict or allow specific clients to mount a sub-directory. These clients can be authenticated with an IP address, a host name, or a Classless Inter-Domain Routing (CIDR) range.
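  A sketch of what such a restriction could look like; the volume name, sub-directory names, addresses, and host name are all made-up values, and the exact value syntax should be confirmed against the Red Hat Storage documentation:
      # gluster volume set VOLNAME nfs.export-dir "/sales(192.0.2.0/24),/archive(203.0.113.10|client1.example.com)"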
- BZ#1032081
- With this update, to prevent occurrences of split-brain when a volume is used for storing virtual machine images, executing the gluster volume set VOLNAME group virt command enables both client-side and server-side quorum in the virt profile.
- BZ#1025240
- With this update, the output message of the remove-brick command has been enhanced and the following message is displayed: "Migration of data is not needed when reducing replica count. Use the 'force' option".
- BZ#1027128
- Previously, using the -h option with the quota-remove-xattr.sh script, or running quota-remove-xattr.sh to display the disk quota usage, would lead to the script being executed. With this update, this issue is fixed.
- BZ#922788
- Previously, excessive logs were generated. With this fix, the number of log messages is reduced. The log level can be set to DEBUG to view all the messages.
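  If full detail is needed again, the log level can be raised per volume; the option names below are the standard GlusterFS diagnostics options (not quoted in this advisory) and the volume name is a placeholder:
      # gluster volume set VOLNAME diagnostics.brick-log-level DEBUG
      # gluster volume set VOLNAME diagnostics.client-log-level DEBUG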
- BZ#923560
- Previously, the remove-brick operation supported removal of only one replica pair at a time. With this update, multiple replica pair removal is supported. If the bricks are from the same sub-volumes, removal is successful irrespective of the order of the bricks specified on the CLI.
- BZ#956977
- Previously, when ACL was ON, the GETACL call would invalidate the FSID, causing a dbench operation with 100 threads to fail. With this fix, the GETACL call responds with a valid FSID and works as expected.
- BZ#881378
- Previously, mounting an Export/NFS volume using Red Hat Enterprise Virtualization Manager failed. With this fix, the iptables rules are set properly and the Red Hat Storage volume NFS mount operations work as expected.
- BZ#904074
- With this enhancement, the in-memory cached list of all connected NFS clients is persisted. The list of connected NFS clients is stored in /var/lib/glusterd/nfs/rmtab. The nfs.mount-rmtab option can be set to point to a file on shared storage so that a list of NFS clients is maintained for the whole trusted storage pool. Earlier, restarting the glusterFS-NFS server would empty this list (although NFS clients would still be connected). For more information about this issue, refer to https://access.redhat.com/site/solutions/300963.
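  An illustrative way to point the rmtab at shared storage (the path is hypothetical and must be reachable from all servers in the trusted pool):
      # gluster volume set VOLNAME nfs.mount-rmtab /mnt/shared/nfs-rmtab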
- BZ#1001895
- Previously, the quota list command would give incorrect and inconsistent output in certain cases. With this update, the quota list command works consistently and as expected.
- BZ#864867
- Enhancements are made to provide an option to configure Access Control Lists (ACL) in the glusterFS-NFS server with the nfs.acl option.
- To set nfs.acl on, run the following command: gluster volume set VOLNAME nfs.acl on
- To set nfs.acl off, run the following command: gluster volume set VOLNAME nfs.acl off
- BZ#851030
- With this update, an enhancement has been made to provide an option to disable the IO-threads least-priority queue.
- BZ#989362
- Previously, the log message for NFS did not have all the required information. With this update, information about the sub-volume name is added to the log message.
- BZ#1016608
- Previously, the default I/O size for NFS was 64 KB. With this update, the default value is increased to 1 MB, and the I/O size negotiated by the clients is honored.
- BZ#916857
- Previously, a No such file or directory log message was recorded frequently at INFO level on the bricks. This issue existed for all file operations that did not pass the path as an argument, because the POSIX translator of the brick processes always assumed the path to be a non-null variable. If the file did not exist on the brick, the file operation failed with ENOENT and the path variable was NULL. With this fix, the brick processes log this message at DEBUG level. For more information on this issue, see https://access.redhat.com/site/solutions/516613.
- BZ#1021776
- Previously, quota was enabled or disabled with the gluster volume set VOLNAME quota on/off command. As a result, the volume set command incorrectly displayed a success message when enabling or disabling quota. With this fix, this option is deprecated and an appropriate warning message is displayed.
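  Quota itself continues to be managed through the dedicated quota command rather than volume set; shown with a placeholder volume name:
      # gluster volume quota VOLNAME enable
      # gluster volume quota VOLNAME disable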
- BZ#902644
- Previously, the virt file was not regenerated after it was deleted from /var/lib/glusterd/groups and the glusterd service was restarted; re-installation of the glusterfs-server RPM was required to regenerate the virt file. With this update, the gluster volume set VOLNAME group virt command displays an error message if the virt file is invalid or deleted.
- BZ#1083526
- Previously, when glusterd discovered on restart that its volume configuration was stale, it updated the configuration to the latest version using a peer's copy. The bricks of the volume could be listening on ports different from those in the stale configuration, which caused clients to connect to stale ports and writes to not be performed on those bricks. With this release, the clients consult glusterd for the new ports of the bricks and subsequent writes on the bricks are successful.
redhat-access-plugin-rhsc
- BZ#1022338
- Previously, the rhsc-setup command failed and an error message was displayed stating that the redhatsupportplugin could not be installed. Now, with this update, the import path for redhatsupportplugin is updated and the rhsc-setup command runs successfully.
- BZ#1030248
- Previously, while creating a support case using the red-hat-access-plugin-rhsc command, generating the Red Hat Storage Console SOS report failed. Now, generating the Red Hat Storage Console SOS report is successful.
- BZ#1007751
- Previously, during the installation of the rhsc-setup RPM, benign warnings were displayed. Now, with this update, the warnings are not displayed.
redhat-storage-server
- BZ#1038475
- Enhancements are made to add the gluster-deploy tool as an unsupported tool in Red Hat Storage Server. This tool eases the first-time setup of a Red Hat Storage node and reduces the steps that need to be run manually.
- BZ#1020291
- Previously, while adding nodes, warning messages about the virbr0 network that were not relevant to Red Hat Storage Console were displayed. Now, with this update, the warning messages are not displayed.
- BZ#984691
- Enhancements have been made to provide a One-RPM-for-RHS-Installation method to facilitate accurate non-ISO installations of the Red Hat Storage server. As a result, there will be no difference in RPM packages, configuration settings, and branding settings between non-ISO based and ISO based installations of the Red Hat Storage server.
rhsc
- BZ#999795
- Previously, the Gluster Hooks list was not displayed if some of the hooks directories (post/pre) were missing in the Red Hat Storage nodes. With this update, the Gluster Hooks list is displayed.
- BZ#975414
- Previously, the Feedback link displayed on the administration portal directed to Red Hat Enterprise Virtualization-Beta Product Page instead of Red Hat Storage Product page. With this update, the Feedback link is removed.
- BZ#979177
- An enhancement is made to display the Red Hat Storage version in the General sub-tab of Hosts tab in Red Hat Storage Console.
- BZ#1044511
- Previously, the Cluster Compatibility Version 3.1, which is not supported, was displayed in the Red Hat Storage Console UI. Now, only the supported cluster compatibility versions are displayed.
- BZ#983716
- Previously, removing a server from cluster failed and an error message was displayed. With this update, the server is removed successfully and an appropriate message is displayed.
- BZ#1043032
- Previously, newly added servers were removed from the database by the sync job because the new server was not found in the cluster. This was due to gluster peer status being invoked on a different server. Now, with this update, the servers are not removed by the sync job when they are added to an existing cluster.
- BZ#1041565
- Previously, queries not relevant to Red Hat Storage Console were displayed during Red Hat Storage Console setup. Now, with this update, only queries relevant to Red Hat Storage Console are displayed during setup.
- BZ#1019570
- Previously, fields relevant to virtualization were displayed in the Red Hat Storage Console. Now, with this update, the fields relevant to virtualization are removed and the fields relevant to Red Hat Storage Console are displayed.
- BZ#1040923
- Previously, locales which are not supported were displayed in Red Hat Storage Console. Now, with this update, only the supported locales are displayed.
- BZ#972619
- Previously, while adding hosts, the newly added hosts were removed from the database automatically. Now, with this update, the user can add/move hosts successfully.
- BZ#970581
- Previously, the Option Key drop-down list in the Add Option window of Volume Options collapsed automatically. Now, the Option Key drop-down list is displayed correctly.
- BZ#1002416
- Enhancements have been made to add Up/Down icons to display the status of volumes/bricks instead of providing the status as text.
- BZ#1036639
- Previously, the Welcome page of Red Hat Storage Console Administration portal was not displayed and an error page was displayed. Now, with this update, the Welcome page is displayed.
- BZ#1008942
- Enhancements have been made to add the "Allow bricks in root partition and reuse existing bricks by clearing xattrs" checkbox in the Add Bricks window to enable the user to create brick directories in the root partition.
- BZ#1021659
- Previously, the ports 8080 (for Swift service) and 38469 (for NFS ACL support) were not listed in the firewall setting list of Red Hat Storage Console as these were overwritten in the firewall setting of the Red Hat Storage node after it was added in the Red Hat Storage Console. Now, with this update, the Firewall Setting is configured to open the 8080 and 38469 ports.
- BZ#1021773
- Previously, if an import host operation was triggered for other peers of an already added host, that host name was listed in the Import Hosts window and the import hosts operation would hang. Now, with this update, importing hosts is successful.
- BZ#1031585
- Enhancements have been made to allow the removal of more than a pair of bricks from a distributed-replicate volume.
- BZ#1023034
- Previously, the information on the recently performed operations was not displayed correctly in the Tasks tab. With this update, the recently performed operations are displayed correctly.
- BZ#1015478
- Previously, users with volume manipulation permission were unable to delete a volume. Now, with this update, a checkbox has been added under the Volume category to grant the volume deletion permission.
- BZ#1024263
- Previously, resolving hook conflicts by copying the hook to all the servers failed with an error because the correct hooks directory was not created. With this update, changes are made to create the proper path for the hooks directory.
- BZ#1024377
- Enhancements have been made to allow users with GlusterAdmin permission to add, edit, or remove clusters and hosts.
- BZ#1024592
- Previously, the title of the Brick Advanced Details window was Brick Details, and the Mount Options field was displayed before the File System field. With this update, the window title and field order have been corrected.
- BZ#1012278
- Previously, links not relevant to Red Hat Storage Console were displayed on the Home page. Now, with this update, only the links that are relevant to Red Hat Storage Console are displayed.
- BZ#1024736
- Previously, error pop-ups did not have a title. Now, with this update, Problem Occurred has been added as the title for error pop-ups.
- BZ#1024997
- Previously, an error was displayed while removing multiple hosts belonging to the same cluster. Now, with this update, removal of multiple hosts is allowed.
- BZ#1015013
- Previously, an error was displayed whenever there was a data overflow in a few fields of the Advanced Details window. Now, with this update, the data type of those fields is modified to avoid the data overflow.
- BZ#1012871
- Previously, the Bricks tab remained in the loading state for long time and the bricks were not displayed. With this update, the User Interface rendering logic has been modified and the Bricks tab loads correctly with the bricks.
- BZ#880736
- Previously, the link to the Administration Guide from the Red Hat Storage Console directed to an error page. With this update, this issue has been fixed as the documents are packaged with Red Hat Storage Console.
- BZ#1026882
- Previously, the payload details were not displayed in the RSDL listing. Now, with this update, the URL patterns in the RSDL listing are corrected and the payload details are displayed.
- BZ#955464
- Previously, the Data Center field, which is not relevant to Red Hat Storage Console, was displayed in the Clusters tab. Now, with this update, the Data Center field is removed.
- BZ#1027178
- Previously, the spacing for main and sub tabs of volume was not equal and the items in the main tab were hidden. Now, with this update, the spacing issue is fixed to display the fields correctly.
- BZ#1027583
- Previously, the Sync Mom Policy option, which is not relevant to Red Hat Storage Console, was displayed in the Red Hat Storage Console. With this update, this option is removed.
- BZ#1025869
- Previously, when volume creation took more than 2 minutes, an error message was incorrectly displayed. With this update, the error message is not displayed for long operations.
- BZ#885592
- Previously, the self-heal daemon service status details were not displayed in the Services sub-tab of the Clusters tab. With this update, the self-heal daemon service details are displayed.
- BZ#1024649
- Previously, the bricks displayed were not sorted in the Bricks sub-tab of the Hosts tab. With this update, the bricks are sorted and displayed.
- BZ#1015020
- Previously, fields which are not relevant to Red Hat Storage Console were displayed in the Add Network window. Now, with this update, a check is added to verify the application mode and the fields which are not applicable to Red Hat Storage Console are not displayed.
- BZ#1029648
- Enhancements have been made to change the term Gluster to Volumes in the Edit Role window.
- BZ#1012241
- Previously, permissions which are not relevant to Red Hat Storage Console were listed for Users. Now, with this update, the permissions are displayed based on the application mode of installation and only the permissions relevant to Red Hat Storage Console are displayed.
- BZ#850429
- Previously, the Red Hat Storage Console REST API was displaying irrelevant content. Now, only the content relevant to Red Hat Storage Console is displayed.
- BZ#886478
- Previously, validation of the number of bricks required to create a distributed-stripe volume was not correct. Now, with this update, validation is added to check the number of bricks added and an error message is displayed if the number of bricks is not correct.
- BZ#1018076
- Previously, an error message was displayed during Gluster Sync and the operation failed. With this update, the issue is fixed to allow Gluster Sync and all the hooks are listed.
- BZ#1021982
- Previously, the warning displayed during the Remove Brick operation had irrelevant characters. Now, with this update, the Remove Brick warning text is corrected.
- BZ#850422
- An enhancement is made to enable users to manage and monitor the gluster asynchronous operations like Rebalance and Remove Brick.
- BZ#907462
- Previously, clicking the Refresh icon in the Red Hat Storage Console did not refresh the tree view. Now, clicking the Refresh icon refreshes the tree view.
- BZ#1011775
- Previously, a prompt to select the application mode was displayed while running the rhsc-setup command. With this update, the prompt for selecting the application mode is removed from rhsc-setup and the application mode is set to gluster.
- BZ#916117
- Previously, while editing the volume option, the existing value was not displayed in the event log message. With this update, the existing value is displayed.
- BZ#1021441
- Previously, when a host was down, the status of the bricks on that host was displayed as Up. Now, the correct status is displayed.
- BZ#918683
- Previously, adding a new logical network to a cluster displayed an error. This issue was due to the role not being present in Red Hat Storage Console. Now, with this update, changes have been made to roles to allow logical network creation.
- BZ#927753
- Previously, the rhsc-setup command failed to retrieve the engine database password when run for the second time without running the rhsc-cleanup command. Now, with this update, no error is displayed when the command is executed a second time.
- BZ#1008675
- Previously, the creation of a distributed-stripe volume with 8 bricks was identified and displayed as the stripe volume type in the Volumes tab and the General sub-tab of the Volumes tab. Now, the volume types are identified and displayed correctly.
- BZ#1004141
- Previously, non-gluster roles were required to be removed explicitly from roles table for branding purpose. Now, with this update, the roles are listed based on the application mode and the roles are not removed.
- BZ#1002661
- Previously, incorrect version details were displayed in the API listing. Now, with this update, the correct version details are set to display in the API listing.
- BZ#955488
- Previously, the terms Hosts and Servers were used interchangeably in the Red Hat Storage Console. Now, with this update, all references to 'Servers' are changed to 'Hosts'.
- BZ#958813
- Previously, users were not able to know the brick status change as the event messages were not displayed. Now, with this update, event messages are added for brick status change.
- BZ#992912
- Previously, roles that were not relevant to Red Hat Storage Console were listed while adding or editing roles. Now, with this update, the roles are displayed based on the application mode of installation and only the roles relevant to Red Hat Storage Console are displayed.
- BZ#992899
- Previously, adding a host was allowed even if the glusterFS-generated UUID of the host was the same as that of an existing host, but the peer status on both hosts with the same UUID was displayed as 0. Now, with this update, an error message is displayed while adding a host with the same UUID.
- BZ#987494
- Previously, an error message was displayed while removing hosts in maintenance mode and the removal was not allowed. Now, with this update, the removal of hosts in maintenance mode is allowed.
- BZ#973091
- Previously, binary hook contents were displayed during Resolve Conflicts. Now, with this update, the binary hook content is not displayed in the Resolve Conflicts window.
- BZ#1020691
- Previously, the brick status was displayed as Down even though the volume was Up during Import Cluster. Now, with this update, the brick status is displayed correctly.
- BZ#1040960
- Previously, configurations that were not relevant to Red Hat Storage Console were displayed during Red Hat Storage Console setup. Now, with this update, the settings are modified to display the queries correctly.
- BZ#987427
- Previously, a wrong event message was displayed while adding a brick to volume. Now, with this update, the event log message is corrected.
- BZ#986171
- Previously, the list of hooks from removed hosts was displayed in the Gluster Hooks tab. Now, with this update, the hook list is cleared when the hosts are removed from the cluster.
- BZ#983051
- Previously, the Import Cluster operation failed when one of the hosts was unreachable. Now, with this update, the Import Cluster operation is successful as the unreachable host entry is removed from the list.
- BZ#982540
- Previously, users were not able to subscribe for Host Event Notifications and Volume Event Notifications. Now, with this update, database entries are added for Host Events and Volume Events to enable notifications.
- BZ#980750
- Previously, when a non-operational host was moved to Maintenance mode, it was removed from the cluster. Now, with this update, the automatic removal of a host in Maintenance mode is disabled.
- BZ#975805
- Previously, while expanding the View Hook Content window size, the content exceeded the dialog boundaries. Now, with this update, the option to expand the window is disabled and the content is displayed within the dialog boundaries.
- BZ#1020190
- Previously, the NFS Setup and Datacenter Storage Type fields, which are not relevant to Red Hat Storage Console, were displayed under the configuration preview during setup. Now, with this update, only fields relevant to Red Hat Storage Console are displayed.
- BZ#975382
- Previously, when the glusterd service was not running on a host, operations were allowed from the Console even though they failed on the host, and the status of such hosts was displayed as Up. Now, with this update, the status of the glusterd service is checked and the host status is displayed as Non-Operational if the service is not running.
- BZ#975055
- Previously, while creating a new cluster, the lower cluster compatibility version was selected in the New Cluster window. Now, with this update, the highest compatibility version is selected by default.
- BZ#974023
- Previously, an error was displayed while running the rhsc-check-update command to check for updates. Now, with this update, the rhsc-check-update command is obsoleted by the engine-upgrade-check command to check for updates. Users with an older version of Red Hat Storage Console must update the package to get the new engine-upgrade-check package.
- BZ#974018
- Previously, an error was displayed while running the rhsc-upgrade command. Now, with this update, the rhsc-upgrade command is obsoleted by the rhsc-setup command, and the rhsc-setup command also performs the upgrade task. Users with an older version of Red Hat Storage Console must update the package to get the new rhsc-setup package.
- BZ#973638
- Previously, the Add Event Notification pop-up had options which are not relevant to Red Hat Storage Console. Now, with this update, only options that are relevant to Red Hat Storage Console are displayed.
- BZ#1020187
- Previously, the password was saved as plain text in the answer file generated during rhsc-setup. Now, with this update, the answer file is made accessible only to the root user.
- BZ#1021397
- Previously, the CPU Name field, which is not relevant to Red Hat Storage Console, was displayed in the General sub-tab of the Hosts tab. Now, with this update, the CPU Name field is removed.
- BZ#1031899
- Previously, the PowerUserRole field was displayed while querying for list of system permissions. Now, with this update, the query is modified to get correct values and PowerUserRole is not displayed.
- BZ#1031901
- Previously, while adding or removing a role from System Permissions through Configure option, event log message with a repeated word was displayed. Now, with this update, the correct message is displayed.
- BZ#1036035
- Previously, an event log message in Manage Events had a spelling error. Now, with this update, the appropriate event log message is displayed.
- BZ#914667
- Previously, when creating volumes with restricted names, the correct error message was not displayed from the gluster CLI. Now, an appropriate error message is displayed.
rhsc-cli
- BZ#1032221
- Previously, the help command output in rhsc-shell displayed irrelevant text. Now, with this update, the correct text is displayed while running the help command in rhsc-shell.
- BZ#972589
- Previously, the show event command in rhsc-shell displayed an error. Now, with this update, the show event command displays the event information correctly.
- BZ#972581
- Previously, the list events --show-all command in rhsc-shell displayed an error. Now, with this update, the events are listed correctly.
rhsc-log-collector
- BZ#929047
- Previously, the engine-log-collector command to collect data from hypervisors/servers failed. Now, running the engine-log-collector command completes successfully.
- BZ#1018123
- Previously, the rhsc-log-collector command to collect data from hypervisors/servers failed. Now, with this update, the rhsc-log-collector command completes successfully, collecting logs from hypervisors/servers.
rhsc-sdk
- BZ#1033772
- Previously, resolving hook content conflict by copying hook content from another host failed. Now, the correct hook content is copied and resolving hook content conflict is successful.
- BZ#1018904
- Previously, adding bricks was successful, but an error message was displayed. Now, the error message is not displayed.
- BZ#1038686
- Previously, the accepted format for the action parameter for the glusterhook resolve action was incorrect. Now, with this update, the format of the action parameter is corrected.
- BZ#1037709
- Previously, the step type value for the rebalance volume and remove bricks operations was displayed as UNKNOWN in the API listing. Now, the correct step type is displayed.
vdsm
- BZ#1013611
- Previously, the default gateway was removed after adding the host and the node was not able to reach outside the network due to the missing gateway. Now, after adding a host, the default gateway is set properly and the host can reach outside the network.