Chapter 3. RHBA-2013:1769

The bugs contained in this chapter are addressed by advisory RHBA-2013:1769. Further information about this advisory is available at http://rhn.redhat.com/errata/RHBA-2013-1769.html.

gluster-smb

BZ#1012711
Previously, while upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1, new settings were stored only in smb.conf.rpmnew and not applied. These settings are critical; if they are not applied, Red Hat Storage volumes accessed using Samba perform poorly. Now, all Red Hat Storage specific Samba settings are added to a secondary configuration file that is automatically updated on every upgrade. This secondary configuration file overrides any settings defined in the primary smb.conf file. With this update, upgrades no longer require manually merging the existing and new recommended Samba settings.
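As a sketch of how such an override works (the secondary file name below is hypothetical; the actual path is chosen by the Red Hat Storage packages), Samba honors the last definition of a parameter it reads, so an include directive in the primary smb.conf lets the secondary file take precedence:

    # /etc/samba/smb.conf (primary configuration)
    [global]
        # hypothetical secondary file maintained by Red Hat Storage;
        # parameters defined in it override earlier definitions
        include = /etc/samba/rhs-samba.conf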
BZ#858434
Previously, when a volume was started or stopped, Samba hook scripts modified the volume share entries in the Samba configuration file. When a user force-started a volume that was already in the started state, a second entry was added to the configuration file without checking for an existing entry. This led to duplicate entries, which caused configuration inconsistencies. Now, with this fix, a check for existing entries is done before an entry is added to the Samba configuration file.

glusterfs

BZ#1021808
Previously, when an application performed parallel file operations from multiple threads on a volume through the libgfapi interface, a change in the volume configuration, such as a volume set operation, caused file operations to hang. Now, this issue has been fixed.
BZ#1013556
Previously, the help option with quota did not display an appropriate message, and executing a quota command with an invalid option produced unclear output. Now, with this fix, an error message indicating that the option is invalid is displayed when the quota command is executed with an invalid option.
BZ#906117
Previously, the NFS share utilization was displayed incorrectly after quota was enabled. Now, with this update, this issue is fixed.
BZ#1011905
Previously, glusterd did not validate the value supplied for the default-soft-limit quota option on a quota-enabled volume. Now, glusterd validates the value, and only valid percentages can be set for the default-soft-limit option.
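For example, assuming an illustrative volume named testvol:

    # accepted: a valid percentage
    gluster volume quota testvol default-soft-limit 80%
    # rejected after this update: not a valid percentage
    gluster volume quota testvol default-soft-limit abc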
BZ#1001489
Previously, when the volume quota list command was executed on a quota-enabled volume with quota limits configured on directories, the CLI could crash. Now, with this update, the CLI does not crash when the volume quota list command is executed on such a volume.
BZ#979641
Previously, the backupvolfile-server option did not fetch data from the backup server when the main server was down. Now, with this update, this issue is fixed and the option fetches data from the backup server.
BZ#981035
Previously, the showmount command executed on an NFS client timed out due to an internal parsing error. With this update, the internal parsing is handled properly and the response is checked before being sent to the client, which fixes the time-out issue.
BZ#826732
An enhancement has been made to enable the quota-deem-statfs option using the volume set command, which results in the df command displaying the quota limits and sizes instead of the back-end disk sizes.
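For example, assuming an illustrative volume named testvol mounted at /mnt/testvol:

    gluster volume set testvol features.quota-deem-statfs on
    # df now reports the quota limits and consumption rather than
    # the back-end disk sizes
    df -h /mnt/testvol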
BZ#978033
Previously, quota limits could be set on a directory that did not exist in the volume. Now, with this update, quota limits cannot be set on a directory that does not exist.
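For example, assuming an illustrative volume named testvol mounted at /mnt/testvol, the directory must now exist before a limit can be set on it:

    mkdir /mnt/testvol/projects
    gluster volume quota testvol limit-usage /projects 10GB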
BZ#956276
Previously, after adding a brick, the DHT (distribute) internal layout was rewritten without triggering a rebalance. Now, with this update, adding bricks does not trigger a layout rewrite, which also resolves the problem of confusing log messages.
BZ#961704
Previously, after a rebalance, a checksum match of the volume failed with a "transport endpoint not connected" error. Now, with this update, the inode refresh logic has been corrected and the checksum match problem is resolved.
BZ#950314
Previously, after adding a brick, the DHT (distribute) internal layout was rewritten without triggering a rebalance. Now, with this update, adding bricks does not trigger a layout rewrite.
BZ#1002885
Previously, the quota limit configuration stored in the extended attributes of a directory was not present on a newly added brick, so the quota list command could fail to list the limits on the directory when only the newly added bricks were available. Now, with this update, the configuration is copied onto newly added bricks and the quota list command does not fail.
BZ#853246
Previously, on a quota-enabled volume, the quota system prevented writes even before disk space consumption reached the hard limit set on the directory. Now, with this update, writes are prevented only when the hard limit on the directory is reached.
BZ#852140
Previously, when a volume was started or stopped, Samba hook scripts modified the volume share entries in the Samba configuration file. When a user force-started a volume that was already in the started state, a second entry was added to the configuration file without checking for an existing entry. This led to duplicate entries, which resulted in configuration inconsistencies. Now, with this update, a check for existing entries is done before an entry is added to the Samba configuration file.
BZ#871015
Previously, moving files between two directories, at least one of which had a quota limit configured, led to directory sizes not being updated correctly after the move operation. This behavior was observed on distribute volumes, and whether the issue occurred depended on the hash values of the file names. Now, with this update, this issue has been fixed.
BZ#1012900
Previously, the volume quota limit-usage command did not validate very large values (greater than approximately 18500 PB), which could result in the hard limit being set to zero. Now, with this update, the limit-usage command validates the values supplied to it and disallows very large values.

glusterfs-fuse

BZ#980778
Previously, on Red Hat Enterprise Linux 5 clients, the FUSE mount options provided were not handled properly, causing read-only mounts to fail. Now, with this update, mounting a volume as read-only on Red Hat Enterprise Linux 5 clients succeeds.
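For example, an illustrative read-only native mount that now succeeds on Red Hat Enterprise Linux 5 clients:

    mount -t glusterfs -o ro server1:/testvol /mnt/testvol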

glusterfs-geo-replication

BZ#987292
Previously, when a replica pair encountered a failover in geo-replication, it reprocessed already-synced data, resulting in redundant data transfer and delays. With this update, this behavior is fixed and performance is improved.
BZ#1019930
Previously, the geo-replication syncing daemon was restarted whenever a configuration option changed. This caused extra processing time because, after the restart, the daemon fell back to the initial xsync crawl instead of changelog processing. With this update, checkpoint option changes take effect in real time and the gsyncd daemon is not restarted.
BZ#1022518
Previously, if a node in the master cluster of a geo-replication session was rebooted, the session became unresponsive and the whole setup stopped working properly. With this update, this condition is handled properly, and node reboots no longer cause unresponsive processes.
BZ#1025967
A new option has been added to the Geo-replication feature of Red Hat Storage that enables choosing tar over SSH as the sync engine. Executing the gluster volume geo-replication master slave-host::slave config use-tarssh true command enables this option.
BZ#1022830
Previously, in a geo-replication session, if the status file was empty for any reason, the gsyncd process stopped with a backtrace, leading to a faulty state. With this update, all failures to open or read the status file are handled, so the process runs smoothly across restarts.
BZ#1023124
Previously, while syncing changes from the master cluster to the slave cluster using geo-replication, processing the changes took time because extra stat() calls were made to determine the type, mode, and owner/group of each created file. The stat() calls were also made on a mount point, which degraded performance further. With this update, the changelog entries themselves contain all the details required for creating and processing a file, so no stat() calls are made on the mount point, making changelog processing significantly faster.
BZ#1000948
Previously, when a geo-replication session was started on a master volume containing tens of millions of files, it took a long time for the updates to appear on the slave mount point. Now, with this update, this has been fixed.
BZ#1019943
Previously, files that were skipped due to reaching the maximum number of retries were not logged. Now, with this update, such incidents are recorded in the logs.
BZ#998943
Previously, the geo-replication log-rotate utility rotated the existing log files but did not open a new file for subsequent logs, which led to log loss. With this update, the geo-replication log-rotate module creates a new file after rotating the existing files and no logs are lost.
BZ#1027137
Previously, if a node was rebooted while a checkpoint was set on the geo-replication session, the session became unresponsive. With this update, the geo-replication process handles all the failure cases properly.
BZ#1019954
Previously, if any files failed to sync to the slave cluster in geo-replication due to an error, there was no easy way to discover this without digging deep into the log files. With this enhancement, the geo-replication status detail command has a new column for skipped entries; a non-zero value means some files failed to sync.
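For example, assuming an illustrative session between mastervol and slavevol:

    # a non-zero value in the new skipped column indicates files
    # that failed to sync
    gluster volume geo-replication mastervol slavehost::slavevol status detail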
BZ#980910
Previously, the geo-replication daemon did not sync metadata changes made to a file after its creation to the slave side. With this update, the geo-replication module syncs all metadata changes of a file properly to the slave.
BZ#987272
Previously, the geo-replication checkpoint now option set the UTC time in the status, whereas the CLI output displayed human-readable local time. Now, with this update, the CLI sets the epoch value and displays everything in local time, removing the dependency on UTC time.
BZ#1019522
Previously, the geo-replication status CLI displayed the status of each node involved but not the per-brick connection status, making it hard to determine to which node on the slave side a connection was made. With this update, the geo-replication status output displays per-process status instead of per-node status.
BZ#1019515
Previously, when a checkpoint was set on a geo-replication session, it never completed on one member of the replica pair, and the geo-replication status command displayed the checkpoint as "date_time not reached yet" for that particular node. With this update, the checkpoint status for that replica pair member is not displayed.
BZ#984603
Previously, the logic of geo-replication's initial xsync crawl, which fetches the master cluster's file details, failed to capture hard links properly. With this update, hard links are fully supported by geo-replication.
BZ#1004235
With new enhancements for better performance and distribution in the geo-replication feature, the logic for syncing files from the master cluster to the slave has been modified. As a result, a geo-replication session set up before an upgrade failed to sync files to the slave. An upgrade script is now included in this update; running this script after upgrading ensures that syncing is handled properly.
BZ#1007536
Previously, the buffer space considered when checking the slave cluster's total size was inaccurate. Now, with this update, the buffer size is increased and the calculation is accurate.
BZ#1000922
Previously, the input for the log-level configuration option of geo-replication was not validated. Now, an error message is displayed when a user attempts to supply an invalid value.
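For example, assuming an illustrative session between mastervol and slavevol:

    # accepted: a valid log level
    gluster volume geo-replication mastervol slavehost::slavevol config log-level DEBUG
    # now rejected with an error message: not a valid log level
    gluster volume geo-replication mastervol slavehost::slavevol config log-level FOO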

glusterfs-server

BZ#1017007
Previously, due to locking issues in the glusterd management daemon, the daemon crashed when two volume operations were executed simultaneously on the same node. Now, with this update, this issue has been fixed.
BZ#1003736
Previously, when the ownership of a directory was changed to a non-root user and the root-squash option was enabled, setting a quota limit on the directory failed. Now, with the changes made to the root-squash algorithm in this update, setting quota limits on a directory with a non-root owner succeeds.
BZ#998793
Previously, for a quota-enabled volume, the auxiliary mount was mounted at /tmp/volname. Now, with this update, the auxiliary mount is mounted at /var/run/gluster/volname.
BZ#1002613
Previously, the bricks tried to connect to the quota daemon even when quota was disabled, resulting in connection failure messages in the brick logs. Now, with this update, bricks do not connect to the quota daemon when quota is disabled.
BZ#1011694
Previously, the glusterd management daemon saved global options (such as NFS options) in only one volume's configuration store, and the in-memory values were not in sync with the store. This led to a Peer Rejected state after an upgrade to Red Hat Storage 2.1. With this update, the global options are saved in every volume's configuration store and kept in sync with the in-memory values, so the Peer Rejected state is not observed.
BZ#999269
Previously, alert logs were missing when the disk consumption of a directory with quota limits set, on a quota-enabled volume, crossed the soft limit. Now, with this update, alert logs are recorded when disk consumption exceeds the soft limit set on the directory.
BZ#917203
Previously, if a global option (for example, nfs.port) was set while one or more nodes were down, those nodes entered the Peer Rejected state when brought back up. Now, with this update, the peers do not enter the Peer Rejected state when the same operations are carried out.
BZ#1003549
Previously, when the deem-statfs option was set on a quota-enabled volume, df reported incorrect size values. Now, with this update, when the deem-statfs option is on, df reports the size as the hard limit configured on the directory.
BZ#1009851
Previously, the quota list command returned a non-zero exit code even when the command was successful. Now, with this update, the exit code is zero when the quota list command succeeds.
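This can be verified from a shell, assuming an illustrative volume named testvol:

    gluster volume quota testvol list
    echo $?    # prints 0 on success after this update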
BZ#1016019
Quota stores extended attributes (xattrs) for its accounting and enforcement logic, and these xattrs should be cleared when quota is disabled. Previously, the xattrs were not cleared, and the stale data caused quota limit violations after quota was subsequently re-enabled.
BZ#1016478
Previously, quota's disk usage accounting went wrong when the rename(3) system call was performed on a quota-enabled volume with quota limits configured. Now, with this update, the disk usage accounting correctly reflects the disk space used by the volume, even when applications running on the volume perform rename(3) system calls.
BZ#1003759
Previously, setting limits on a single-brick volume, or on a volume where all bricks reside on the same node in a cluster with more than one node, failed due to incorrect aggregation logic in the quota subcommand. Now, with this update, setting limits succeeds.
BZ#982629
Previously, when data was being written to a quota-enabled volume using dd(1), the dd utility could hang on the write(3) system call; ps(1) would show the process corresponding to the dd utility in the 'D' state. Now, with this update, this issue no longer occurs.
BZ#985783
Previously, when the quota feature was enabled, the layout handling in DHT resulted in errors for the root inode. With this update, the interaction between quota and DHT has been modified to resolve this issue.
BZ#1019518
Previously, a getxattr() call on the virtual directory used by geo-replication was logged as an error in the glusterFS log files, causing excessive logging. With this update, getxattr() failures on the virtual directory are not treated as errors.
BZ#1002496
Previously, when a directory with quota limits set was deleted from the FUSE mount and recreated with the same absolute path, the quota list command failed because the brick processes crashed. Now, with this update, this issue has been fixed.
BZ#1019903
Previously, a directory containing files could be moved into a directory whose quota limit was already reached. Now, this issue has been fixed with this update.
BZ#986948
Previously, quota limits could be set on symbolic links to directories, but quota was not enforced on them. Now, with this update, setting quota limits on symbolic links is not possible.
BZ#1020886
Previously, on a quota-enabled volume with the quota-deem-statfs option set to on, the disk usage statistics reported by df and by the volume quota list command differed for a directory with a quota limit configured. Now, with this update, the disk usage statistics reported by df and volume quota list are identical.
BZ#1001556
Previously, the volume quota list command could report different sizes in the Used column of its output on successive runs after the hard limit was reached on a directory. This was seen on plain replicate and distributed-replicate volumes: quota disk usage can differ between bricks of the same replica set until the disk usage accounting converges to reflect the actual consumption, and during this interval the difference was visible in the volume quota list output. Now, with this update, the volume quota list command reports the maximum disk usage seen across the bricks in a replica set, ensuring consistent output.
BZ#978704
Previously, the volume status command did not display the status of the quota daemon. Now, with this update, the volume status command displays the quota daemon status.
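For example, assuming an illustrative volume named testvol:

    # the output now includes a status row for the quota daemon
    # on each node, alongside the brick entries
    gluster volume status testvol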
BZ#1000996
Previously, alert logs identified the files being written to rather than the corresponding directory on which the quota limit was set. Now, with this update, alert logs identify the directory on which the quota limit is set when file operations are performed on files inside it.
BZ#998786
Previously, the trusted.glusterfs.quota.size extended attribute, which holds the size, was not created on the root (/) of the volume, leading to a wrong size being displayed in the CLI. Now, with this fix, the correct size is displayed.
BZ#1025333
Previously, when data already present in the volume before the quota feature was enabled consumed more disk space than the configured limit, the df utility reported negative values in the Used and Available columns. Now, with this update, the df utility reports correctly even when data was populated before the quota feature was enabled.
BZ#1000903
Previously, the volume reset command failed when protected options were set on the volume; quota is a protected option, so the reset command failed when quota was enabled on the volume. Now, with this update, the volume reset command does not fail when executed without the force option even when protected options are set, and a message is displayed indicating that the force option must be used to reset the protected options.
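For example, assuming an illustrative volume named testvol with quota enabled:

    # succeeds and prints a message that force is needed for
    # protected options such as quota
    gluster volume reset testvol
    # also resets the protected options
    gluster volume reset testvol force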
BZ#1000302
Previously, the volume quota list command did not display all directories on which quota limits were set. A change in the algorithm for fetching the quota configuration associated with a directory fixed the issue. Now, with this update, the volume quota list command displays all directories on which quota limits are set.
BZ#988275
Previously, split-brain related messages were observed in nfs.log when listing the contents of a directory on a quota-enabled volume. Now, with this update, no split-brain related messages are seen in the nfs.log file.
BZ#988957
Previously, on a quota-enabled volume with the deem-statfs option set to on, the df utility did not report the total disk space available as the hard limit configured on the directory. Now, with this update, the df command output reports the total disk space available as expected for this configuration.
BZ#989758
Previously, the values in the Used and Available columns of the volume quota list command fluctuated even when data was only being written to a quota-enabled volume and not removed. Now, with this update, the volume quota list output does not fluctuate when data is only being added to the volume.
BZ#1027525
Previously, the volume quota list command returned different disk usage values for the same directory on which quota limits were set, depending on whether it was invoked with a directory path argument or without any argument. Now, with this update, both invocations of the volume quota list command return the same disk usage values for a given directory.
BZ#1005460
Previously, when the deem-statfs option was set on a quota-enabled volume containing no data, the df command output did not display disk usage statistics for the volume's mount. Now, with this update, this issue has been fixed and the df command output reports correct statistics on a volume with no data.
BZ#998904
Previously, on a quota-enabled volume, renames performed in a directory with quota limits set could fail due to an accounting logic error. Now, with this update, the accounting logic handles rename operations correctly and they work as expected.
BZ#998797
Previously, the quota list command displayed quota configuration even when quota was disabled on a volume, because the quota configuration file was not deleted when quota was disabled. Now, with this update, glusterd fails the quota list command if quota is not enabled on the volume, and no quota information is displayed.
BZ#1005465
Previously, quota options such as default-soft-limit, hard-timeout, soft-timeout, and alert-time were not validated by glusterd. This resulted in invalid values in the brick's volume file, and restarting the bricks failed because validation of the option's value failed. Now, with this update, glusterd validates the values and rejects any values that fall out of range. The CLI also displays the allowed range of values.
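For example, assuming an illustrative volume named testvol, in-range values are accepted while out-of-range values are now rejected with the allowed range shown:

    gluster volume quota testvol soft-timeout 60
    gluster volume quota testvol hard-timeout 5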
BZ#924687
Previously, the NFS server returned the entire access field irrespective of what the client had requested. Some clients, such as AIX, displayed a Permission Denied error in such cases. Now, only the requested access bits are returned and the client displays no error message.
BZ#876461
Previously, it was not possible to map a path to its inode when operations were performed through the NFS protocol. Now, an enhancement has been added to the quota feature that provides the correct mapping between inode and path, which solves the problem of quota limit enforcement on an NFS mount.
BZ#1019504
Previously, the glusterd management daemon became unresponsive after executing the gluster volume log rotate help command, due to internal lock management issues. Now, with this update, the internal locks are handled properly and glusterd functions correctly.
BZ#981692
Previously, glusterd logs displayed messages about quotad being stopped and started, because quotad was restarted for every quota operation except quota list. Now, with this update, quotad is restarted only when quota is enabled or disabled on a volume, which reduces the number of log messages considerably.
BZ#902982
Previously, the stat(3) system call failed after moving data into a quota-enabled glusterFS volume over an NFS mount using rsync. The failure could be observed by executing ls -l on files that were copied over using the rsync utility. Now, with this update, the stat(3) system call does not fail.
BZ#977544
Previously, the quota limit-usage command's logic for updating the quota configuration file had high latency. Now, with this update, the quota configuration file updating logic is improved and takes much less time.