3.9 Release Notes
Release Notes for Container-Native Storage on OpenShift Container Platform 3.9
Chapter 1. What's New in this Release?
- Expanding the Persistent Volume Size: With this release, the persistent volume size for dynamically provisioned volumes on file storage can be increased by increasing the size requested in the persistent volume claim. The parameter allowVolumeExpansion must be set to "true" in the storage class file to enable this feature. For more information, refer to https://access.redhat.com/documentation/en-us/container-native_storage/3.9/html/container-native_storage_for_openshift_container_platform/chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-OpenShift_Creating_Persistent_Volumes#sect_expanding_pv.
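As an illustration, a storage class with expansion enabled might look like the following minimal sketch; the class name, heketi endpoint, user, and secret here are hypothetical placeholders, not values from these release notes:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-file                 # hypothetical class name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.example.com:8080"  # hypothetical heketi endpoint
  restuser: "admin"                                  # hypothetical user
  secretNamespace: "default"                         # hypothetical
  secretName: "heketi-secret"                        # hypothetical
allowVolumeExpansion: true           # enables PVC-driven expansion
```

With this in place, a bound volume is expanded by editing the claim's spec.resources.requests.storage to the larger size.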
- Providing a Custom Volume Name Prefix for a Persistent Volume: You can provide a custom volume name prefix for the persistent volumes created on both file and block storage. With a custom volume name prefix, users can easily search and filter volumes based on:
- Any string that was provided as the field value of "volumenameprefix" in the storage class file.
- The persistent volume claim name.
- The project/namespace name.
The parameter volumenameprefix has to be included in the storage class file to enable this feature. For more information about enabling this for block storage, refer to https://access.redhat.com/documentation/en-us/container-native_storage/3.9/html/container-native_storage_for_openshift_container_platform/Block_Storage#sect_block-custom-volname-prefix. For more information about enabling this for file storage, refer to https://access.redhat.com/documentation/en-us/container-native_storage/3.9/html/container-native_storage_for_openshift_container_platform/chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-OpenShift_Creating_Persistent_Volumes#sect_file-custom-volname-prefix.
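As a sketch, the prefix is set as a storage class parameter; the class name, endpoint, and prefix value below are hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-file-prefixed        # hypothetical class name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.example.com:8080"  # hypothetical heketi endpoint
  volumenameprefix: "dept-finance"                   # hypothetical custom prefix
```

Volumes provisioned from such a class carry a name that combines the prefix, the project/namespace, and the claim name, which makes them straightforward to search and filter.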
- Viewing Volume Metrics: With this release, you can view details and various metrics showing persistent volume consumption, in addition to the allocated persistent volume size, for dynamically provisioned volumes on file storage. The metrics that can be viewed in Prometheus are:
kubelet_volume_stats_available_bytes: Number of available bytes in the volume.
kubelet_volume_stats_capacity_bytes: Capacity in bytes of the volume.
kubelet_volume_stats_inodes: Maximum number of inodes in the volume.
kubelet_volume_stats_inodes_free: Number of free inodes in the volume.
kubelet_volume_stats_inodes_used: Number of used inodes in the volume.
kubelet_volume_stats_used_bytes: Number of used bytes in the volume.
For more information about volume metrics, refer to https://access.redhat.com/documentation/en-us/container-native_storage/3.9/html/container-native_storage_for_openshift_container_platform/chap-documentation-red_hat_gluster_storage_container_native_with_openshift_platform-openshift_creating_persistent_volumes#enable_vol_metrics.
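As an illustration (a sketch, not part of the original notes), these metrics can be combined in a Prometheus query to report how full each volume is:

```
# Percentage of capacity used, per persistent volume
100 * kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes
```

A similar ratio over the inode metrics (kubelet_volume_stats_inodes_used / kubelet_volume_stats_inodes) reveals inode exhaustion.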
Chapter 2. Notable Bug Fixes
- Previously, deleting a heketi pod while a heketi operation was in progress resulted in incomplete entries in the database. With this fix, such entries are marked "pending" until the operation completes, preserving a consistent database view.
- Earlier, the 'device info' output displayed the state of the device as 'failed' after a device remove operation completed. With this fix, the state of the device is changed to 'removed', which matches the operation performed.
- Earlier, it was possible to run multiple device remove operations in parallel on the same device. This led to race conditions and database inconsistencies. With this fix, an error is returned when another device remove operation on the same device is already in progress.
- Previously, the gluster-block provisioner did not identify the storage units in the PVC correctly. For example, it treated a request of 1 as 1 GiB by default, and the provisioner failed on 1Gi. With this enhancement, the gluster-block provisioner identifies the storage units correctly; that is, 1 is treated as 1 byte, 1Gi as 1 gibibyte, and 1Ki as 1 kibibyte.
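For illustration, a claim requesting one gibibyte should now spell the unit explicitly; the claim name and storage class below are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-claim                  # hypothetical claim name
spec:
  storageClassName: gluster-block    # hypothetical storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # 1 GiB; plain "1" would mean 1 byte, "1Ki" 1 KiB
```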
Chapter 3. Known Issues
- In a gluster cluster with more than three nodes, if one or more nodes are down but at least three nodes are up, heketi intermittently fails to create new replica 3 volumes, even if the healthy nodes have sufficient space available. To work around this issue, execute the following command for each unavailable node, which makes heketi volume creation 100% reliable:
# heketi-cli node disable <node-id>
After the nodes are available again, execute the following command to let heketi take the node into account again when creating volumes:
# heketi-cli node enable <node-id>
- Volumes that were created using Container-Native Storage 3.5 or earlier do not have the GID stored in the heketi database. Hence, when a volume expansion is performed, new bricks do not get the group ID set on them, which might lead to I/O errors.
- The following two lines might be logged repeatedly in the rhgs-server-docker container/gluster container logs:
[MSGID: 106006] [glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management: nfs has disconnected from glusterd.
[socket.c:701:__socket_rwv] 0-nfs: readv on /var/run/gluster/1ab7d02f7e575c09b793c68ec2a478a5.socket failed (Invalid argument)
These logs appear because glusterd is unable to start the NFS service. There is no functional impact, as NFS export is not supported in Containerized Red Hat Gluster Storage.
Appendix A. Revision History
| Revision 1.0-4 | Thu Apr 05 2018 |
| Revision 1.0-3 | Wed Apr 04 2018 |
| Revision 1.0-2 | Tue Mar 27 2018 |
| Revision 1.0-1 | Wed Mar 14 2018 |