Chapter 3. Known Issues

This chapter provides a list of known issues at the time of release.

3.1. Red Hat Gluster Storage

Issues related to Snapshot

  • BZ# 1201820
    When a snapshot is deleted, the corresponding file system object in the User Serviceable Snapshot is also deleted. Any subsequent file system access results in the snapshot daemon becoming unresponsive.
    Workaround: Ensure that you do not perform any file system operations on the snapshot that is about to be deleted.
  • BZ# 1160621
    If the current directory is not a part of the snapshot, for example, snap1, then the user cannot enter the .snaps/snap1 directory.
  • BZ# 1169790
    When a volume is down and there is an attempt to access the .snaps directory, a negative cache entry is created in the kernel Virtual File System (VFS) cache for the .snaps directory. After the volume is brought back online, accessing the .snaps directory fails with an ENOENT error because of the negative cache entry.
    Workaround: Clear the kernel VFS cache by executing the following command:
    # echo 3 >/proc/sys/vm/drop_caches
  • BZ# 1170145
    If you restore a volume while you are in the .snaps directory, then after the restore operation is complete, the No such file or directory error message is displayed at the mount point.
    Workaround:
    1. Navigate to the parent directory of the .snaps directory.
    2. Drop VFS cache by executing the following command:
      # echo 3 >/proc/sys/vm/drop_caches
    3. Change to the .snaps folder.
  • BZ# 1170365
    Virtual inode numbers are generated for all the files in the .snaps directory. If there are hard links, they are assigned different inode numbers instead of the same inode number.
  • BZ# 1170502
    On enabling the User Serviceable Snapshot feature, if a directory or a file by name .snaps exists on a volume, it appears in the output of the ls -a command.
  • BZ# 1174618
    If the User Serviceable Snapshot feature is enabled, and a directory has a pre-existing .snaps folder, then accessing that folder can lead to unexpected behavior.
    Workaround: Rename the pre-existing .snaps folder with another name.
  • BZ# 1167648
    Performing operations that involve client graph changes, such as volume set operations or snapshot restore, eventually leads to out-of-memory scenarios for the client processes that mount the volume.
  • BZ# 1133861
    New snapshot bricks fail to start if the total snapshot brick count on a node exceeds 1K.
    Workaround: Deactivate unused snapshots.
  • BZ# 1126789
    If any node or the glusterd service is down when a snapshot is restored, then any subsequent snapshot creation fails.
    Workaround: Do not restore a snapshot if a node or the glusterd service is down.
  • BZ# 1139624
    Taking a snapshot of a Gluster volume creates another volume similar to the original. A Gluster volume consumes some amount of memory when it is in the started state, and so does each snapshot volume. As a result, the system can run out of memory.
    Workaround: Deactivate unused snapshots to reduce the memory foot print.
  • BZ# 1129675
    If glusterd is down on one of the nodes in the cluster, or if the node itself is down, then performing a snapshot restore operation leads to the following inconsistencies:
    • Executing the gluster volume heal vol-name info command displays the error message Transport endpoint not connected.
    • Error occurs when clients try to connect to glusterd service.
    Workaround: Perform snapshot restore only if all the nodes and their corresponding glusterd services are running.
    Restart the glusterd service using the following command.
    # service glusterd start
  • BZ# 1105543
    When a node with an old snapshot entry is attached to the cluster, the old entries are propagated throughout the cluster, and old snapshots that no longer exist are displayed.
    Workaround: Do not attach a peer with old snap entries.
  • BZ# 1104191
    The snapshot command fails if it is run simultaneously from multiple nodes while heavy read or write operations are in progress on the origin (parent) volume.
    Workaround: Avoid running multiple snapshot commands simultaneously from different nodes.
  • BZ# 1059158
    The NFS mount option is not supported for snapshot volumes.
  • BZ# 1113510
    The output of the gluster volume info command displays the snapshot configuration values (snap-max-hard-limit, snap-max-soft-limit) even though these values are not set explicitly and must not be displayed.
  • BZ# 1111479
    Attaching a new node to the cluster while a snapshot delete is in progress deletes the snapshots successfully, but gluster snapshot list shows that some of the snapshots are still present.
    Workaround: Do not attach or detach a node from the trusted storage pool while a snapshot operation is in progress.
  • BZ# 1092510
    If you create a snapshot while a directory rename is in progress (that is, the rename is complete on the hashed sub-volume but not on all of the sub-volumes), then on snapshot restore the directory that was being renamed has the same GFID for both the source and the destination. Having the same GFID is an inconsistency in DHT and can lead to undefined behavior.
    In DHT, a rename (source, destination) of directories is performed first on the hashed sub-volume and, if successful, then on the rest of the sub-volumes. At that point in time, both the source and destination directories are present in the cluster with the same GFID: the destination on the hashed sub-volume and the source on the rest of the sub-volumes. A parallel lookup (on either source or destination) at this time can result in the creation of directories on the missing sub-volumes: a source directory entry on the hashed sub-volume and a destination directory entry on the rest. Hence, there would be two directory entries, source and destination, with the same GFID.
  • BZ# 1112250
    Probing/detaching a new peer during any snapshot operation is not supported.
  • BZ# 1236149
    If a node/brick is down, the snapshot create command fails even with the force option.
  • BZ# 1240227
    LUKS encryption over LVM is currently not supported.
  • BZ# 1236025
    The time stamp of files and directories changes when a snapshot restore is executed, resulting in a failure to read the appropriate changelogs. glusterfind pre fails with the following error: 'historical changelogs not available'
    Existing glusterfind sessions fail to work after a snapshot restore.
    Workaround: Gather the necessary information from existing glusterfind sessions, remove the sessions, perform a snapshot restore, and then create new glusterfind sessions.
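    The workaround above can be sketched as the following sequence. The session, volume, and snapshot names are placeholders, and the cluster-side commands are echoed rather than executed, since they require a live Gluster cluster:

```shell
# Sketch of the glusterfind session recovery around a snapshot restore.
# SESSION, VOLNAME and the snapshot name are placeholders; the commands
# are echoed rather than executed, since they need a live cluster.
SESSION=mysession
VOLNAME=myvol

run() { echo "+ $*"; }

run glusterfind list                           # 1. note existing sessions
run glusterfind delete "$SESSION" "$VOLNAME"   # 2. remove the old session
run gluster snapshot restore snap1             # 3. perform the restore
run glusterfind create "$SESSION" "$VOLNAME"   # 4. create a fresh session
```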
  • BZ# 1160412
    During the update of the glusterfs-server package, warnings and fatal errors from librdmacm appear on-screen if the machine does not have an RDMA device.
    Workaround: You may safely ignore these errors if the configuration does not require Gluster to work with RDMA transport.
  • BZ# 1246183
    User Serviceable Snapshots is not supported on Erasure Coded (EC) volumes.

Issues related to Nagios

  • BZ# 1136207
    The volume status service shows the All bricks are Up message even when some of the bricks are in an UNKNOWN state due to the unavailability of the glusterd service.
  • BZ# 1109683
    When a volume has a large number of files to heal, the volume self heal info command takes time to return results and the nrpe plug-in times out as the default timeout is 10 seconds.
    Workaround:
    In /etc/nagios/gluster/gluster-commands.cfg, increase the timeout of the nrpe plug-in to 10 minutes by using the -t option in the command.
    Example: $USER1$/gluster/check_vol_server.py $ARG1$ $ARG2$ -o self-heal -t 600
  • BZ# 1094765
    When certain commands invoked by Nagios plug-ins fail, irrelevant outputs are displayed as part of performance data.
  • BZ# 1107605
    Executing sadf command used by the Nagios plug-ins returns invalid output.
    Workaround: Delete the datafile located at /var/log/sa/saDD, where DD is the current day of the month. This deletes the datafile for the current day; a new datafile is automatically created and is usable by the Nagios plug-in.
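    A minimal sketch of this cleanup; the removal itself is left commented out as a precaution:

```shell
# Sketch: compute the path of today's sysstat datafile; the removal is
# left commented out as a precaution. sadc recreates the file on its
# next run.
sa_file="/var/log/sa/sa$(date +%d)"   # DD = current day of month
echo "datafile for today: $sa_file"
# rm -f "$sa_file"
```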
  • BZ# 1107577
    The Volume self heal service returns a WARNING when unsynchronized entries are present in the volume, even though these files may be synchronized during the next run of the self-heal process if self-heal is turned on for the volume.
  • BZ# 1121009
    In Nagios, CTDB service is created by default for all the gluster nodes regardless of whether CTDB is enabled on the Red Hat Gluster Storage node or not.
  • BZ# 1089636
    In the Nagios GUI, incorrect status information is displayed as Cluster Status OK : None of the Volumes are in Critical State, when volumes are utilized beyond critical level.
  • BZ# 1111828
    In Nagios GUI, Volume Utilization graph displays an error when volume is restored using its snapshot.
  • Bricks with an UNKNOWN status are not considered DOWN when the volume status is calculated. When the glusterd service is down on one node, the brick status changes to UNKNOWN while the volume status remains OK. You may assume the volume is up and running when its bricks may not be, so the correct status cannot be detected from the volume status alone.
    Workaround: Rely on the notifications sent when glusterd is down and when bricks are in an UNKNOWN state.
  • When the configure-gluster-nagios command tries to get the IP address and flags for all network interfaces on the system, and there is an issue while retrieving them for a NIC, the following error is displayed:
    ERROR:root:unable to get ipaddr/flags for nic-name: [Errno 99] Cannot assign requested address
    However, the command actually succeeds and configures Nagios correctly.

Issues related to Rebalancing Volumes

  • BZ# 1110282
    Executing the rebalance status command after stopping the rebalance process fails, displaying a message that the rebalance process is not started.
  • BZ# 960910
    After executing rebalance on a volume, running the rm -rf command on the mount point to recursively remove all of the content from the current working directory may return a Directory not Empty error message.
  • BZ# 862618
    After completion of the rebalance operation, there may be a mismatch in the failure counts reported by the gluster volume rebalance status output and the rebalance log files.
  • BZ# 1039533
    While rebalance is in progress, adding a brick to the cluster displays an error message, failed to get index, in the gluster log file. This message can be safely ignored.
  • BZ# 1064321
    When a node is brought back online after a rebalance, the status displays that the operation is completed, but the data is not rebalanced. In a remove-brick rebalance operation, the data on that node is not rebalanced, and running the commit command can cause data loss.
    Workaround: Run the rebalance command again if any node was brought down while rebalance was in progress, and also when the rebalance operation is performed after a remove-brick operation.
  • BZ# 1237059
    The rebalance process on a distributed-replicated volume may stop if a brick from a replica pair goes down as some operations cannot be redirected to the other available brick. This causes the rebalance process to fail.
  • BZ# 1245202
    When rebalance is run as part of a remove-brick command, some files may be reported as being in split-brain and are therefore not migrated, even if the files are not actually in split-brain.
    Workaround: Manually copy the files that did not migrate from the bricks into the Gluster volume via the mount.

Issues related to Geo-replication

  • BZ# 1102524
    The Geo-replication worker goes to a faulty state and restarts when resumed. It works as expected after the restart, but synchronization takes more time than a resume would.
  • BZ# 987929
    While the rebalance process is in progress, starting or stopping a Geo-replication session results in some files not getting synced to the slave volumes. Likewise, when a Geo-replication sync is in progress, running the rebalance command causes the sync process to stop, so some files do not get synced.
  • BZ# 1029799
    When there are tens of millions of files on the master volume, it takes a long time after a Geo-replication session starts before updates are observed on the slave mount point.
  • BZ# 1027727
    When there are hundreds of thousands of hard links on the master volume prior to starting the Geo-replication session, some hard links do not get synchronized to the slave volume.
  • BZ# 984591
    After stopping a Geo-replication session, if files already synced to the slave volume are renamed, then when Geo-replication starts again the renamed files are treated as new (without considering the rename) and are synced to the slave volume again. For example, if 100 files were renamed, you would find 200 files on the slave side.
  • BZ# 1235633
    Concurrent rmdir and lookup operations on a directory during a recursive remove may prevent the directory from being deleted on some bricks. The recursive remove operation fails with Directory not empty errors even though the directory listing from the mount point shows no entries.
    Workaround: Unmount the volume and delete the contents of the directory on each brick. If the affected volume is a geo-replication slave volume, stop the geo-replication session before deleting the contents of the directory on the bricks.
  • BZ# 1238699
    The Changelog History API expects the brick path to remain the same for a session. However, on snapshot restore, the brick path changes, causing the History API to fail and geo-replication to become Faulty.
    Workaround: To resolve this issue, perform the following steps:
    1. After the snapshot restore, ensure the master and slave volumes are stopped.
    2. Backup the htime directory (of master volume).
      cp -a <brick_htime_path> <backup_path>

      Note

      Using -a option is important to preserve extended attributes.
      For example:
      cp -a /var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/.glusterfs/changelogs/htime /opt/backup_htime/brick0_b0
    3. Run the following command to replace the OLD path in the htime file(s) with the new brick path:
      find <new_brick_htime_path> -name 'HTIME.*' -print0 | \
      xargs -0 sed -ci 's|<OLD_BRICK_PATH>|<NEW_BRICK_PATH>|g'
      where OLD_BRICK_PATH is the brick path of the current volume, and NEW_BRICK_PATH is the brick path after snapshot restore. For example:
      find /var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/.glusterfs/changelogs/htime/ -name 'HTIME.*' -print0  | \
      xargs -0 sed -ci 's|/bricks/brick0/b0/|/var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/|g'
    4. Start the Master and Slave volumes and Geo-replication session on the restored volume. The status should update to Active.
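    Steps 2 and 3 above can be combined into a small helper. This is a sketch with placeholder paths (it uses a plain sed -i for the in-place rewrite); verify the result before restarting the volumes:

```shell
# Sketch of steps 2 and 3: back up the htime directory, then rewrite the
# old brick path inside the HTIME.* files. All paths are placeholders.
rewrite_htime() {
    htime_dir=$1; backup=$2; old=$3; new=$4
    cp -a "$htime_dir" "$backup"            # -a preserves extended attributes
    find "$htime_dir" -name 'HTIME.*' -print0 |
        xargs -0 -r sed -i "s|$old|$new|g"  # in-place path rewrite
}

# Example invocation (hypothetical snapshot path):
# rewrite_htime /bricks/brick0/b0/.glusterfs/changelogs/htime \
#     /opt/backup_htime/brick0_b0 \
#     /bricks/brick0/b0/ /var/run/gluster/snaps/SNAPID/brick0/b0/
```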
  • BZ# 1240333
    Concurrent rename and lookup operations on a directory can cause both old and new directories to be "healed." Both directories will exist at the end of the operation and will have the same GFID. Clients might be unable to access some of the contents of the directory.
    Workaround: Contact Red Hat Support Services.

Issues related to Self-heal

  • BZ# 1063830
    Performing add-brick or remove-brick operations on a volume having replica pairs when there are pending self-heals can cause potential data loss.
    Workaround: Ensure that all bricks of the volume are online and there are no pending self-heals. You can view the pending heal info using the command gluster volume heal volname info.
  • BZ# 1230092
    When you create a replica 3 volume, client quorum is enabled and set to auto by default. However, it does not get displayed in gluster volume info.
  • BZ# 1233608
    When cluster.data-self-heal, cluster.metadata-self-heal, and cluster.entry-self-heal are set to off (through volume set commands), the Gluster CLI command to resolve split-brain fails with the message File not in split brain, even though the file is in split-brain.
  • BZ# 1240658
    When files are accidentally deleted from a brick in a replica pair in the back-end, and gluster volume heal VOLNAME full is run, then there is a chance that the files may not get healed.
    Workaround: Perform a lookup on the files from the client (mount). This triggers the heal.
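    The lookup can be triggered with stat from the mount point. A sketch, where the mount path is a placeholder (defaulting to the current directory for illustration only):

```shell
# Sketch: trigger a lookup on every file under the mount point, which
# kicks off healing for files with pending heals. MOUNT is a placeholder
# for the volume's FUSE mount (defaults to "." for illustration).
MOUNT=${MOUNT:-.}
find "$MOUNT" -type f -print0 | xargs -0 -r stat >/dev/null
echo "lookups done under $MOUNT"
```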
  • BZ# 1173519
    If you write to an existing file and go over the available brick space, the write fails with an I/O error.
    Workaround: Use the cluster.min-free-disk option. If you routinely write files up to n GB in size, set min-free-disk to a value m GB that is greater than n.
    For example, if your file size is 5 GB, which is at the high end of the file sizes you will be writing, you might consider setting min-free-disk to 8 GB. This ensures that the file will be written to a brick with enough available space (assuming one exists).
    # gluster volume set VOLNAME cluster.min-free-disk 8GB

Issues related to replace-brick operation

  • After the gluster volume replace-brick VOLNAME Brick New-Brick commit force command is executed, the file system operations on that particular volume, which are in transit, fail.
  • After a replace-brick operation, the stat information is different on the NFS mount and the FUSE mount. This happens due to internal time stamp changes when the replace-brick operation is performed.

Issues related to Directory Quota

  • BZ# 1021466
    After setting a quota limit on a directory, creating subdirectories and populating them with files, and then renaming the files while the I/O operation is in progress, causes a quota limit violation.
  • BZ# 998791
    During a file rename operation if the hashing logic moves the target file to a different brick, then the rename operation fails if it is initiated by a non-root user.
  • BZ# 1020713
    In a distribute or distribute-replicate volume, while setting a quota limit on a directory, if one or more bricks or one or more replica sets, respectively, experience downtime, quota is not enforced on those bricks or replica sets when they are back online. As a result, disk usage can exceed the quota limit.
    Workaround: Set quota limit again after the brick is back online.
  • BZ# 1032449
    In the case when two or more bricks experience a downtime and data is written to their replica bricks, invoking the quota list command on that multi-node cluster displays different outputs after the bricks are back online.

Issues related to NFS

  • After you restart the NFS server, the unlock within the grace-period feature may fail and the locks held previously may not be reclaimed.
  • fcntl locking (NFS Lock Manager) does not work over IPv6.
  • You cannot perform an NFS mount on a machine on which the glusterfs-nfs process is already running unless you use the NFS mount -o nolock option, because glusterfs-nfs has already registered the NLM port with the portmapper.
  • If the NFS client is behind a NAT (Network Address Translation) router or a firewall, the locking behavior is unpredictable. The current implementation of NLM assumes that Network Address Translation of the client's IP does not happen.
  • The nfs.mount-udp option is disabled by default. You must enable it to use POSIX locks on Solaris when using NFS to mount a Red Hat Gluster Storage volume.
  • If you enable the nfs.mount-udp option, while mounting a subdirectory (exported using the nfs.export-dir option) on Linux, you must mount using the -o proto=tcp option. UDP is not supported for subdirectory mounts on the GlusterFS-NFS server.
  • For NFS Lock Manager to function properly, you must ensure that all of the servers and clients have resolvable hostnames. That is, servers must be able to resolve client names and clients must be able to resolve server hostnames.

Issues related to NFS-Ganesha

  • BZ# 1224250
    Same epoch values on all the NFS-Ganesha heads result in the NFS server sending NFS4ERR_FHEXPIRED errors instead of NFS4ERR_STALE_CLIENTID or NFS4ERR_STALE_STATEID after failover. As a result, NFSv4 clients are unable to recover locks after failover.
    Workaround: To use NFSv4 locks, specify different epoch values for each NFS-Ganesha head before setting up the NFS-Ganesha cluster.
  • BZ# 1226874
    If NFS-Ganesha is started before you set up an HA cluster, there is no way to validate the cluster state and stop NFS-Ganesha if the setup fails. Even if the HA cluster setup fails, the NFS-Ganesha service continues running.
    Workaround: If the HA setup fails, run service nfs-ganesha stop on all nodes in the HA cluster.
  • BZ# 1227169
    Executing the rpcinfo -p command after stopping nfs-ganesha displays NFS related programs.
    Workaround: Use rpcinfo -d to unregister each of the NFS-related services listed by rpcinfo -p. Alternatively, restart the rpcbind service using the following command:
    # service rpcbind restart
  • BZ# 1228196
    If you have less than three nodes, pacemaker shuts down HA.
    Workaround: To restore HA, add a third node with ganesha-ha.sh --add $path-to-config $node $virt-ip .
  • BZ# 1233533
    When the nfs-ganesha option is turned off, Gluster NFS may not restart automatically. The volume may no longer be exported from the storage nodes via an NFS server.
    Workaround:
    1. Turn off the nfs.disable option for the volume:
      gluster volume set VOLNAME nfs.disable off
    2. Restart the volume:
      gluster volume start VOLNAME force
  • BZ# 1235597
    On the nfs-ganesha server IP, showmount does not display a list of the clients mounting from that host.
  • BZ# 1236017
    When a server is rebooted, services such as pcsd and nfs-ganesha do not start by default. Because nfs-ganesha is not running on the rebooted node, that node is not part of the HA cluster.
    Workaround: Manually restart the services after a server reboot.
  • BZ# 1238561
    Although DENY entries are handled by nfs4_setfacl, they cannot be stored directly in the backend (a DENY entry cannot be converted to a POSIX ACL), so DENY entries are not displayed by nfs4_getfacl. If a permission bit is not set in an ALLOW entry, it is treated as a DENY.

    Note

    Use the minimal required permissions for the EVERYONE@ entry; otherwise, it will result in undesired nfs4_acl behavior.
  • BZ# 1240258
    When files and directories are created on the mount point with root squash enabled for nfs-ganesha, executing the ls -l command displays user:group as 4294967294:4294967294 instead of nfsnobody:nfsnobody. This is because the client maps only the 16-bit unsigned representation of -2 to nfsnobody, whereas 4294967294 is the 32-bit equivalent of -2.
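    The two representations of -2 can be checked with shell arithmetic; this is an illustration of the mapping only, not a fix:

```shell
# Illustration of the two unsigned representations of -2 mentioned above.
echo $(( (1 << 32) - 2 ))   # 4294967294, the 32-bit value shown by ls
echo $(( (1 << 16) - 2 ))   # 65534, the 16-bit value mapped to nfsnobody
```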
    This is currently a limitation in upstream nfs-ganesha.
  • BZ# 1240502
    Delete-node logic does not remove the VIP of the deleted node from ganesha-ha.conf. The VIP exists even after the node is deleted from the HA cluster.
    Workaround: Manually delete the entry if it is not required for subsequent operations.
  • BZ# 1241436
    The output of the refresh-config option is not meaningful.
    Workaround: If the output is similar to 'method return sender=:1.61 -> dest=:1.65 reply_serial=2', consider the operation successful.
  • BZ# 1242148
    When ACLs are enabled and you rename a file, an error is displayed on the NFSv4 mount even though the operation succeeds. The operation may take a few seconds to complete.
  • BZ# 1246007
    NFS-Ganesha export files are not copied as part of snapshot creation. As a result, snapshot restore will not work with NFS-Ganesha.

Issues related to Object Store

  • The GET and PUT commands fail on large files while using Unified File and Object Storage.
    Workaround: You must set the node_timeout=60 variable in the proxy, container, and the object server configuration files.

Issues related to Red Hat Gluster Storage Volumes:

  • BZ# 986090
    Currently, the Red Hat Gluster Storage server has issues with mixed usage of hostnames, IPs and FQDNs to refer to a peer. If a peer has been probed using its hostname but IPs are used during add-brick, the operation may fail. It is recommended to use the same address for all the operations, that is, during peer probe, volume creation, and adding/removing bricks. It is preferable if the address is correctly resolvable to a FQDN.
  • BZ# 852293
    The management daemon does not have a rollback mechanism to revert any action that may have succeeded on some nodes and failed on those that do not have the brick's parent directory. For example, setting the volume-id extended attribute may fail on some nodes and succeed on others. Because of this, subsequent attempts to recreate the volume using the same bricks may fail with the error brickname or a prefix of it is already part of a volume.
    Workaround:
    1. You can either remove the brick directories or remove the glusterfs-related extended attributes.
    2. Try creating the volume again.
  • BZ# 913364
    An NFS server reboot does not reclaim the file LOCK held by a Red Hat Enterprise Linux 5.9 client.
  • BZ# 1030438
    On a volume, when read and write operations are in progress and simultaneously a rebalance operation is performed followed by a remove-brick operation on that volume, then the rm -rf command fails on a few files.
  • BZ# 1224064
    Glusterfind is an independent tool and is not integrated with glusterd. When a Gluster volume is deleted, the glusterfind session directories and files for that volume persist.
    Workaround: Manually delete the Glusterfind session directory for the volume on each node, under /var/lib/glusterd/glusterfind
  • BZ# 1224153
    When a brick process dies, BitD tries to read from the socket used to communicate with the corresponding brick. When the read fails, BitD logs the failure. Because the read keeps failing, many such messages accumulate in the log files, increasing their size.
  • BZ# 1224162
    Due to an unhandled race in the RPC interaction layer, brick down notifications may result in corrupted data structures being accessed. This can lead to NULL pointer access and segfault.
    Workaround: When the Bitrot daemon (bitd) crashes (segfault), you can use volume start VOLNAME force to restart bitd on the node(s) where it crashed.
  • BZ# 1224880
    If you delete a gluster volume before deleting the Glusterfind session, the Glusterfind session cannot be deleted, and a new session cannot be created with the same name.
    Workaround: On all the nodes that were part of the volume before you deleted it, manually clean up the session directory in: /var/lib/glusterd/glusterfind/SESSION/VOLNAME
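    The per-node cleanup can be sketched as a loop. The node list, session, and volume names are placeholders, and the commands are echoed rather than executed:

```shell
# Sketch: remove the leftover glusterfind session directory on every
# node that hosted the deleted volume. NODES, SESSION and VOLNAME are
# placeholders; the commands are echoed rather than executed.
NODES="node1 node2 node3"
SESSION=mysession
VOLNAME=myvol
for node in $NODES; do
    echo ssh "$node" rm -rf "/var/lib/glusterd/glusterfind/$SESSION/$VOLNAME"
done
```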
  • BZ# 1226995
    Using brick up time to calculate the next scrub time results in premature filesystem scrubbing:
    • Brick up-time: T Next scrub time (frequency hourly): T + 3600 seconds
    • After 55 minutes (T + 3300 seconds), the scrub frequency is changed to daily. Therefore, the next scrub would happen at (T + 86400 seconds) rather than (current_time + 86400 seconds).
  • BZ# 1227672
    A successful scrub of the filesystem (objects) is required to see if a given object is clean or corrupted. When a file gets corrupted and a scrub hasn't been run on the filesystem, there is a good chance of replicating corrupted objects in cases when the brick holding the good copy was offline when I/O was performed.
    Workaround: Objects need to be checked on demand for corruption during healing.
  • BZ# 1231150
    When you set diagnostics.client-log-level to DEBUG and then reset the option, DEBUG logs continue to appear in the log files, even though the INFO log level is enabled by default.
    Workaround: Restart the volume using gluster volume start VOLNAME force, to reset log level defaults.
  • BZ# 1233213
    If you run a gluster volume info --xml command on a newly probed peer without running any other gluster volume command in between, brick UUIDs will appear as null ('00000000-0000-0000-0000-000000000000').
    Workaround: Run any volume command (excluding gluster volume list and gluster volume get) before you run the info command. Brick UUIDs correctly populate.
  • BZ# 1236153
    The shared storage Gluster command accepts only the cluster.enable-shared-storage key. It should also accept the enable-shared-storage key.
  • BZ# 1236503
    Disabling cluster.enable-shared-storage results in the deletion of any volume named gluster_shared_storage even if it is a pre-existing volume.
  • BZ# 1237022
    If you have a cluster with more than one node and try to perform a peer probe from a node that is not part of the cluster, the peer probe fails without a meaningful notification.
  • BZ# 1241314
    The volume get VOLNAME enable-shared-storage option always shows as disabled, even when it is enabled.
    Workaround: The gluster volume info VOLNAME command shows the correct status of the enable-shared-storage option.
  • BZ# 1241336
    When a Red Hat Gluster Storage node is shut down due to power or hardware failure, or when the network interface on a node goes down abruptly, subsequent gluster commands may time out because the corresponding TCP connection remains in the ESTABLISHED state.
    You can confirm this by executing the following command: ss -tap state established '( dport = :24007 )' dst IP-addr-of-powered-off-RHGS-node
    Workaround: Restart glusterd service on all other nodes.
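    The check and the restart can be sketched together; the failed node's address is a placeholder:

```shell
# Sketch: from a surviving node, list stale ESTABLISHED connections to
# glusterd (port 24007) on the failed node. NODE_IP is a placeholder.
NODE_IP=${NODE_IP:-192.0.2.10}
ss -tn state established "( dport = :24007 )" dst "$NODE_IP"
# If stale entries are listed, restart glusterd on this node:
# service glusterd restart
```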
  • BZ# 1223306
    gluster volume heal VOLNAME info shows stale entries, even after the file is deleted. This happens due to a rare case when the gfid-handle of the file is not deleted.
    Workaround: On the bricks where the stale entries are present, for example, <gfid:5848899c-b6da-41d0-95f4-64ac85c87d3f>, perform the following steps:
    1. Check whether the file's gfid handle is still present:
      # find <brick-path>/.glusterfs -type f -links 1
      Check whether the file <brick-path>/.glusterfs/58/48/5848899c-b6da-41d0-95f4-64ac85c87d3f appears in the output.
    2. If it appears in the output, delete that file:
      # rm <brick-path>/.glusterfs/58/48/5848899c-b6da-41d0-95f4-64ac85c87d3f
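    The handle path used above can be derived from the gfid itself: the first two pairs of hex characters name the intermediate directories. A sketch, with a placeholder brick path:

```shell
# Sketch: derive the .glusterfs gfid-handle path for a stale gfid.
# BRICK is a placeholder for the brick path.
gfid=5848899c-b6da-41d0-95f4-64ac85c87d3f
BRICK=${BRICK:-/bricks/brick1}
p1=$(printf '%s' "$gfid" | cut -c1-2)      # first two hex characters
p2=$(printf '%s' "$gfid" | cut -c3-4)      # next two hex characters
handle="$BRICK/.glusterfs/$p1/$p2/$gfid"
echo "$handle"
# A stale handle has a link count of 1; delete it only in that case:
# [ "$(stat -c %h "$handle")" -eq 1 ] && rm "$handle"
```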
  • BZ# 1224180
    In some cases, operations on the mount display an Input/Output error instead of the Disk quota exceeded message after the quota limit is exceeded.
  • BZ# 1244759
    Sometimes gluster volume heal VOLNAME info shows some symlinks which need to be healed for hours.
    To confirm this issue, the files must have the following extended attributes:
    # getfattr -d -m. -e hex -h /path/to/file/on/brick | grep trusted.ec
    
    Example output: 
    trusted.ec.dirty=0x3000
    trusted.ec.size=0x3000
    trusted.ec.version=0x30000000000000000000000000000001
    The first four digits must be 3000 and the file must be a symlink/softlink.
    Workaround: Execute the following commands on the files on each brick, and ensure that all operations on them are stopped.
    1. trusted.ec.size must be deleted.
      # setfattr -x trusted.ec.size /path/to/file/on/brick
    2. First 16 digits must be '0' in both the trusted.ec.dirty and trusted.ec.version attributes, and the remaining 16 digits should stay as they are. If the number of digits is less than 32, pad with '0's.
      # setfattr -n trusted.ec.dirty -v 0x00000000000000000000000000000000 /path/to/file/on/brick
      # setfattr -n trusted.ec.version -v 0x00000000000000000000000000000001 /path/to/file/on/brick

Issues related to POSIX ACLs:

  • Mounting a volume with -o acl can negatively impact the directory read performance. Commands like recursive directory listing can be slower than normal.
  • When POSIX ACLs are set and multiple NFS clients are used, there could be inconsistency in the way ACLs are applied due to attribute caching in NFS. For a consistent view of POSIX ACLs in a multiple client setup, use the -o noac option on the NFS mount to disable attribute caching. Note that disabling the attribute caching option could lead to a performance impact on the operations involving the attributes.
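    For example, a client-side /etc/fstab entry with attribute caching disabled might look like the following; the server name, volume name, and mount point are placeholders:

```
server1:/VOLNAME  /mnt/glustervol  nfs  defaults,vers=3,noac  0 0
```

    Note that noac trades caching performance for ACL consistency, as described above.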

Issues related to Samba

  • BZ# 1013151
    Accessing a Samba share may fail, if GlusterFS is updated while Samba is running.
    Workaround: On each node where GlusterFS is updated, restart the Samba services after the update.
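    On a Red Hat Enterprise Linux 6 based node, restarting the Samba services could look like the following sketch. The exact set of services depends on your configuration:

```shell
# Restart Samba services after the GlusterFS update on this node.
service smb restart
service nmb restart       # if the nmb service is in use
service winbind restart   # if winbind is in use
```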
  • BZ# 994990
    When the same file is accessed concurrently by multiple users for reading and writing, the users trying to write to the file cannot complete the write operation because the lock is not available.
    Workaround: To avoid the issue, execute the command:
    # gluster volume set VOLNAME storage.batch-fsync-delay-usec 0
  • BZ# 1031783
    If Red Hat Gluster Storage volumes are exported by Samba, NT ACLs set on the folders by Microsoft Windows clients do not behave as expected.
  • BZ# 1164778
    Any changes an administrator makes to a Gluster volume's share section in smb.conf are replaced with the default Gluster hook script settings when the volume is restarted.
    Workaround: The administrator must perform the changes again on all nodes after the volume restarts.

General issues

  • If files and directories have different GFIDs on different back-ends, the glusterFS client may hang or display errors.
    Contact Red Hat Support for more information on this issue.
  • BZ# 1030962
    On installing the Red Hat Gluster Storage Server from an ISO or PXE, the kexec-tools package for the kdump service gets installed by default. However, the crashkernel=auto kernel parameter, which is required for reserving memory for the kdump kernel, is not set for the current kernel entry in the bootloader configuration file, /boot/grub/grub.conf. Therefore, the kdump service fails to start, with the following message in the logs.
    kdump: No crashkernel parameter specified for running kernel
    Workaround: After installing the Red Hat Gluster Storage Server, set the crashkernel=auto parameter, or an appropriate crashkernel=sizeM parameter, manually for the current kernel in the bootloader configuration file. Then reboot the Red Hat Gluster Storage Server system; the memory for the kdump kernel is reserved and the kdump service starts successfully. Refer to the following link for more information on Configuring kdump on the Command Line
    Additional information: On installing a new kernel after installing the Red Hat Gluster Storage Server, the crashkernel=auto kernel parameter is successfully set in the bootloader configuration file for the newly added kernel.
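    One way to apply the workaround is with the grubby utility rather than editing grub.conf by hand. This is a sketch; verify the result against your bootloader configuration before rebooting:

```shell
# Append crashkernel=auto to the current default kernel entry.
grubby --update-kernel="$(grubby --default-kernel)" --args="crashkernel=auto"

# Reboot so the memory for the kdump kernel is reserved.
reboot
```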
  • BZ# 1058032
    While migrating VMs, libvirt changes the ownership of the guest image unless it detects that the image is on a shared file system. As a result, the VMs cannot access the disk images because the required ownership is not available.
    Workaround: Perform the steps:
    1. Power-off the VMs before migration.
    2. After migration is complete, restore the ownership of the VM disk image (107:107).
    3. Start the VMs after migration.
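    Step 2 above can be performed as follows. The image path is a placeholder; 107:107 corresponds to the qemu user and group IDs on the Red Hat Gluster Storage node:

```shell
# Restore the required 107:107 ownership on the VM disk image
# after migration. The path below is a placeholder.
chown 107:107 /var/lib/libvirt/images/vm1.img
```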
  • The glusterd service crashes when volume management commands are executed concurrently with peer commands.
  • BZ# 1130270
    If a 32-bit Samba package is installed before installing the Red Hat Gluster Storage Samba package, the installation fails because Samba packages built for Red Hat Gluster Storage do not have 32-bit variants.
    Workaround: Uninstall the 32-bit variants of the Samba packages.
  • BZ# 1139183
    Red Hat Gluster Storage 3.0 does not prevent clients with versions older than Red Hat Gluster Storage 3.0 from mounting a volume on which rebalance is performed. Mounting such a volume with clients older than Red Hat Gluster Storage 3.0 can lead to data loss.
    Workaround: Install the latest client version to avoid this issue.
  • BZ# 1127178
    If a replica brick goes down and comes up when rm -rf command is executed, the operation may fail with the message Directory not empty.
    Workaround: Retry the operation when there are no pending self-heals.
  • BZ# 1007773
    When the remove-brick start command is executed, even though the graph change is propagated to the NFS server, the directory inodes in memory are not refreshed to exclude the removed brick. Hence, new files that are created may end up on the removed brick.
    Workaround: If files are found on the removed-brick path after remove-brick commit, copy them via a gluster mount point before re-purposing the removed brick.
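    The recovery step can be sketched as follows. All server, volume, and path names are placeholders:

```shell
# Mount the volume through glusterfs so copies go through the cluster,
# then recover stray files found on the removed brick's backend path.
mount -t glusterfs server1:/VOLNAME /mnt/glusterfs
cp -a /bricks/removed-brick/data/file1 /mnt/glusterfs/data/
```

    Only re-purpose the removed brick after confirming all stray files have been copied back through the mount.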
  • BZ# 1120437
    Executing the peer status command on a probed host displays the IP address of the node from which the peer probe was performed.
    Example: When node B is probed by hostname from node A, executing the peer status command on node B displays the IP address of node A instead of its hostname.
    Workaround: Probe node A from node B using node A's hostname. For example, execute # gluster peer probe HostnameA from node B.
  • BZ# 1122371
    The NFS server process and the gluster self-heal daemon process restart when the gluster daemon process is restarted.
  • BZ# 1110692
    Executing the remove-brick status command after stopping the remove-brick process fails with a message that the remove-brick process has not started.
  • BZ# 1123733
    Executing a command that involves glusterd-to-glusterd communication, such as gluster volume status, immediately after one of the nodes goes down hangs and fails after 2 minutes with a CLI timeout message. Subsequent commands fail with the error message Another transaction in progress for 10 minutes (frame timeout).
    Workaround: Set a non-zero value for ping-timeout in /etc/glusterfs/glusterd.vol file.
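    For example, a non-zero ping timeout can be set by adding an option line inside the volume management definition in the /etc/glusterfs/glusterd.vol file. The value of 30 seconds below is illustrative; restart glusterd afterwards for the change to take effect:

```
option ping-timeout 30
```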
  • BZ# 1136718
    The AFR self-heal can leave behind a partially healed file if the brick containing AFR self-heal source file goes down in the middle of heal operation. If this partially healed file is migrated before the brick that was down comes online again, the migrated file would have incorrect data and the original file would be deleted.
  • BZ# 1139193
    After add-brick operation, any application (like git) which attempts opendir on a previously present directory fails with ESTALE/ENOENT errors.
  • BZ# 1141172
    If you rename a file from multiple mount points, the file may be lost. This happens because the mv command sends unlinks instead of renames when the source and destination are hard links to each other. Hence, the issue lies in mv, distributed as part of coreutils in various Linux distributions.
    For example, if there are parallel renames of the form (mv a b) and (mv b a), where a and b are hard links to the same file, then because of the above behavior of mv, unlink(a) and unlink(b) are issued from both instances of mv. This results in losing both links a and b, and hence the file.
  • BZ# 979926
    When any process establishes a TCP connection with the glusterfs servers of a volume using a port greater than 1023, the servers reject the request and the corresponding file or management operations fail. By default, glusterfs servers treat ports greater than 1023 as unprivileged.
    Workaround: To disable this behavior, enable rpc-auth-allow-insecure option on the volume using the steps given below:
    1. To allow insecure connections to a volume, run the following command:
      # gluster volume set VOLNAME rpc-auth-allow-insecure on
    2. To allow insecure connections to glusterd process, add the following line in /etc/glusterfs/glusterd.vol file:
      option rpc-auth-allow-insecure on
    3. Restart glusterd process using the following command:
      # service glusterd restart
    4. Restrict connections to trusted clients using the following command:
      # gluster volume set VOLNAME auth.allow IP address
  • BZ# 1139676
    Renaming a directory may cause both source and target directories to exist on the volume with the same GFID and make some files in these directories not visible from the mount point. The files will still be present on the bricks.
    Workaround: The steps to fix this issue are documented in: https://access.redhat.com/solutions/1211133
  • BZ# 1030309
    During directory creation attempted by geo-replication, even though mkdir fails with EEXIST, the directory might not have a complete layout for some time, and the directory creation fails with a Directory exists message. This can happen if there is a parallel mkdir attempt with the same name. Until the other mkdir completes, the layout is not set on the directory. Without a layout, entry creations within that directory fail.
    Workaround: Set the layout on those sub-volumes where the directory was already created by the parallel mkdir before failing the current mkdir with EEXIST.

    Note

    This is not a complete fix, as the other mkdir might not have created the directory on all sub-volumes. The layout is set only on the sub-volumes where the directory is already created. Any file or directory names that hash to the sub-volumes on which the layout is set can be created successfully.
  • BZ# 1238067
    In rare instances, glusterd may crash when it is stopped. The crash is due to a race between the clean-up thread and a running thread and does not impact functionality. The clean-up thread releases URCU resources while a running thread continues to try to access them, which results in a crash.
  • BZ# 1238171
    When an inode is unlinked from the backend (bricks) directly, the corresponding in-memory inode is not cleaned on subsequent lookup. This causes the recovery procedure using healing daemons (such as AFR/EC self-heal) to not function as expected, as the in-memory inode structure represents a corrupted backend object.
    Workaround: A patch is available. The object may still be recoverable once the inode is forgotten (due to memory pressure or a brick restart). In such cases, accessing the object triggers a successful self-heal and recovers it.
  • BZ# 1241385
    Due to a code bug, the output prefix is not considered when updating the path of deleted entries. The output file or directory name will not have the output prefix.

Issues related to Red Hat Gluster Storage AMI

  • BZ# 1250821
    In the Red Hat Gluster Storage 3.1 on Red Hat Enterprise Linux 7 AMI, the Red Hat Enterprise Linux 7 Server base repository is disabled by default. You must manually enable the repository to receive package updates from it.
    Workaround: To enable the repo manually, run the following command:
    # yum-config-manager --enable rhui-REGION-rhel-server-releases
    Once enabled, the AMI will receive package updates from Red Hat Enterprise Linux 7 server base repo.

3.1.1. Issues Related to Upgrade

  • BZ# 1247515
    As part of the tiering feature, a new dictionary key-value pair was introduced to send the number of bricks in the hot tier, so glusterd expects this key in the dictionary sent to other peers during the data exchange. If one of the nodes runs Red Hat Gluster Storage 2.1, this key-value pair is not sent, which causes glusterd running on Red Hat Gluster Storage 3.1 to complain about the missing key-value pair in the peer data.
    Workaround: There are no functionality issues. An error is displayed in the glusterd logs.