Chapter 2. What Changed in this Release?

2.1. What's New in this Release?

This section describes the key features and enhancements in the Red Hat Gluster Storage 3.3 release.
Resource limits now configurable through gdeploy
Administrators can now set limits on the resources that Red Hat Gluster Storage has access to by configuring a slice for gluster processes in their gdeploy configuration files.
For more information, see Managing Resource Usage in Red Hat Gluster Storage 3.3 Administration Guide.
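Under the hood, a slice is a systemd concept that groups processes and applies resource-control settings to them. As a minimal sketch, a slice unit like the following would cap CPU and memory for any processes placed in it; the unit name gluster.slice and the limit values are assumptions for illustration, and the gdeploy keys that generate such a slice are documented in the Administration Guide.
/etc/systemd/system/gluster.slice:
[Slice]
# Cap gluster processes at half of one CPU and 4 GiB of memory (example values)
CPUQuota=50%
MemoryLimit=4G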
statedump support for applications using gfapi
The statedump feature now supports gathering state information from applications that use gfapi. With Red Hat Gluster Storage 3.3, administrators can take a statedump of such an application using the following command:
# gluster volume statedump vol host:pid
Executing the command places the statedump in the /var/run/gluster directory. The user running the application must have write permissions to /var/run/gluster.
Because a gluster group is created for applications that use gfapi, root privileges are not required; however, the user executing the application must be added to the gluster group using the following command:
# usermod -a -G gluster qemu
For more information, see Viewing complete volume state with statedump in Red Hat Gluster Storage 3.3 Administration Guide.
Gluster block storage
Red Hat Gluster Storage 3.3 introduces block storage for containerized workloads with the gluster-block command line utility. Block storage supports only Container-Native Storage (CNS) and Container-Ready Storage (CRS) use cases. Currently, it enables the creation of high-performance individual storage units, allowing each unit to be treated as an independent disk drive and to support an individual file system.
Gluster-block aims to make the creation and maintenance of Gluster-backed block storage as simple as possible. With gluster-block you can provision block devices, export them as iSCSI LUNs across multiple nodes, and use the iSCSI protocol for data transfer.
For more information, see Storage Concepts in Red Hat Gluster Storage 3.3 Administration Guide.
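For example, a 10 GiB block device with two-way high availability can be provisioned with a command along the following lines; the volume name, host addresses, and size are illustrative:
# gluster-block create block-hosting-vol/block1 ha 2 192.168.1.11,192.168.1.12 10GiB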
Brick Multiplex support
Red Hat Gluster Storage 3.3 introduces brick multiplexing, which supports only Container-Native Storage (CNS) and Container-Ready Storage (CRS) use cases. It allows administrators to reduce the number of ports and processes used by gluster bricks on the same server. When brick multiplexing is enabled, compatible bricks on the same server share a port and a process, reducing per-brick memory usage and port consumption.
For more information, see Many Bricks per Node in Red Hat Gluster Storage 3.3 Administration Guide.
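Brick multiplexing is controlled through a cluster-wide volume option, for example:
# gluster volume set all cluster.brick-multiplex on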
Performance improvements
find command on a volume
  • High CPU usage was experienced when a find command was executed on a volume with a large number of files, because the command accessed the versioning data of each file.
    With this release, object versioning is enabled only when BitRot detection is enabled for a volume. This lowers CPU usage and improves performance when the BitRot daemon is disabled and a find command is executed on a volume with a large number of files.
    For more information, see Detecting BitRot in Red Hat Gluster Storage 3.3 Administration Guide.
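    For reference, BitRot detection is toggled per volume, which in turn controls whether object versioning is active:
    # gluster volume bitrot VOLNAME enable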
Parallel readdirp support
  • readdirp file operations are now sent in parallel to all bricks. This improves the performance of the find command and of recursive listings of small directories.
    For more information, see Enhancing Directory Listing Performance under Tuning Performance in Red Hat Gluster Storage 3.3 Administration Guide.
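    Parallel readdirp is enabled per volume through the performance.parallel-readdir option, for example:
    # gluster volume set VOLNAME performance.parallel-readdir on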
Negative lookup cache for Samba
  • With Red Hat Gluster Storage 3.3, negative lookup caching has been implemented for Samba to improve small-file workload performance. It removes redundant lookup operations, speeding up file and directory creation from Samba clients.
    For more information, see Enhancing File/Directory Create Performance in Red Hat Gluster Storage 3.3 Administration Guide.
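    As a sketch, negative lookup caching is typically switched on through volume options such as the following; the exact set of options recommended for Samba deployments is described in the Administration Guide:
    # gluster volume set VOLNAME features.cache-invalidation on
    # gluster volume set VOLNAME performance.nl-cache on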
Enhancement to glusterfind command
The glusterfind command now offers a query subcommand that lists changed files.
For more information, see Glusterfind Query under Glusterfind Configuration Options in Red Hat Gluster Storage 3.3 Administration Guide.
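For example, the following invocation would list files changed between two points in time; the timestamps and output file name are illustrative:
# glusterfind query VOLNAME --since-time 1500000000 --end-time 1500003600 output_file.txt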
Configurable chunksize
Users can now configure the chunksize directly in the gdeploy backend-setup module, which is simpler than using the ‘pv’, ‘vg’, or ‘lv’ modules. Previously, the ‘lv’ module was the only way to pass chunksize as a parameter.
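A minimal backend-setup snippet along these lines illustrates the idea; apart from chunksize, the keys shown are standard backend-setup parameters, and the values are examples only:
[backend-setup]
devices=sdb
mountpoints=/rhgs/brick1
chunksize=256K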
Dynamic update of export configuration options
Most NFS-Ganesha export configuration options can now be updated dynamically during normal operation, without needing to unexport and re-export the volume.
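For example, after editing a volume's export file, the running configuration can be refreshed rather than re-exporting the volume; the configuration directory shown here is illustrative:
# /usr/libexec/ganesha/ganesha-ha.sh --refresh-config /etc/ganesha VOLNAME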
Enhancement to geo-replication status command
The detailed geo-replication status command no longer requires the master volume, slave host, and slave volume to be specified; it can be executed with or without these details. For example:
# gluster volume geo-replication status detail
For more information, see Displaying Geo-replication Status Information in Red Hat Gluster Storage 3.3 Administration Guide.
RAID 5 disk support
The RAID 5 disk type is now supported in gdeploy, along with JBOD, RAID 6, and RAID 10.
For more information, see Configuration File in Red Hat Gluster Storage 3.3 Administration Guide.
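As an illustration, the disk layout is described to gdeploy along the following lines; the disk count and stripe size are example values, and the exact section layout is shown in the Configuration File section of the Administration Guide:
[disktype]
raid5
[diskcount]
6
[stripesize]
256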
NFS-Ganesha installation without firewalld
NFS-Ganesha installation no longer depends on firewalld; NFS-Ganesha can now be installed without it.
For more information, see Prerequisites to run NFS-Ganesha in Red Hat Gluster Storage 3.3 Administration Guide.
gluster-swift packages updated to OpenStack Swift Newton 2.10.1
The gluster-swift packages are updated to OpenStack Swift Newton 2.10.1 in order to integrate with a supported version of OpenStack Swift, as the previously used version (Kilo) has reached end of support.
Enhanced rebalance status
The gluster volume rebalance VOLNAME status command now provides an estimate of the time remaining until rebalance completes. Note that the estimate assumes each brick has its own file system partition.
For more information, see Displaying Rebalance Progress in Red Hat Gluster Storage 3.3 Administration Guide.
Volume extension using Heketi
A new section has been added to the Red Hat Gluster Storage 3.3 Administration Guide, which contains instructions to expand a volume using Heketi.
For more information, see Expanding a Volume in Red Hat Gluster Storage 3.3 Administration Guide.
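As a quick illustration, Heketi's command line client can expand an existing volume by its ID; the volume ID and size (in GB) below are placeholders:
# heketi-cli volume expand --volume=VOLUME_ID --expand-size=10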
NFS chapter restructure
The NFS chapter in the Red Hat Gluster Storage 3.3 Administration Guide has been restructured with significant improvements. Some of the key updates are:
  • Clear division between Gluster-NFS and NFS-Ganesha.
  • Large sections in the NFS-Ganesha chapter are divided into smaller subsections for better clarity.
  • The flow of content in the NFS-Ganesha section has been revised and additional details added, to improve usability and ease of deployment and configuration.
For more information, see NFS in Red Hat Gluster Storage 3.3 Administration Guide.
Dispersed Volume Enhancements - new variants supported
Dispersed volumes are based on erasure coding, a method of data protection in which data is broken into fragments, expanded and encoded with redundant data pieces, and stored across a set of different locations. This allows recovery of the data stored on one or more bricks in case of failure. A dispersed volume requires less storage space than a replicated volume. The following new variants are supported with this release:
  • 10 bricks with redundancy level 2 (8 + 2)
  • 20 bricks with redundancy level 4 (16 + 4)
For more information, see Creating Dispersed Volumes in Red Hat Gluster Storage 3.3 Administration Guide.
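For example, a 10-brick dispersed volume with redundancy level 2 (8 + 2) could be created along the following lines; the server names and brick paths are illustrative:
# gluster volume create VOLNAME disperse 10 redundancy 2 server{1..10}:/rhgs/brick1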