Chapter 5. Developer previews

This section describes the developer preview features introduced in Red Hat OpenShift Data Foundation 4.12.

Important

Developer preview features are subject to developer preview support limitations. Developer preview releases are not intended to be run in production environments. Clusters deployed with developer preview features are considered development clusters and are not supported through the Red Hat Customer Portal case management system. If you need assistance with developer preview features, reach out to the ocs-devpreview@redhat.com mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules.

5.1. Replica 1 (non-resilient pool)

Applications that manage resiliency at the application level can now use a storage class backed by a single replica, without data resiliency or high availability at the storage layer.
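
The exact mechanism for enabling a single-replica storage class depends on the deployment. As a rough sketch, assuming a Rook-managed cluster and the CephBlockPool CRD, a non-resilient pool could be declared with a replica count of 1 (the pool name below is illustrative; a storage class referencing this pool with the usual RBD CSI parameters would then expose it to applications):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replica1-pool              # illustrative name
  namespace: openshift-storage
spec:
  failureDomain: host
  replicated:
    size: 1                        # single replica: no data resiliency or high availability
    requireSafeReplicaSize: false  # required to allow a replica count of 1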

5.2. Network File System new capabilities

With this release, OpenShift Data Foundation provides Network File System (NFS) v4.1 and v4.2 service for any internal or external applications. The NFS service helps migrate data from any environment to the OpenShift environment, for example, from a Red Hat Gluster Storage file system. NFS features also include volume expansion, snapshot creation and deletion, and volume cloning.

For more information, see Resource requirements for using Network File System and Creating exports using NFS.
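
Once the NFS service is enabled, applications consume it through an NFS-backed storage class like any other PersistentVolumeClaim. A minimal sketch, assuming an NFS storage class named ocs-storagecluster-ceph-nfs (names and sizes are illustrative; see Creating exports using NFS for the supported procedure):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-export-pvc                            # illustrative name
  namespace: my-app                               # illustrative namespace
spec:
  accessModes:
    - ReadWriteMany                               # NFS allows shared read-write access
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-nfs   # assumed NFS storage class name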

5.3. Allow rook-ceph-operator-config environment variables to change defaults on upgrade

This update allows the environment variables in the rook-ceph-operator-config ConfigMap to change the defaults when OpenShift Data Foundation is upgraded from version 4.5 to a later version. This was not possible in earlier versions.
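
The overrides are expressed as keys in the rook-ceph-operator-config ConfigMap in the openshift-storage namespace. A minimal sketch, using the CSI_LOG_LEVEL key as an example of a variable the Rook operator reads from this ConfigMap (the value shown is illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: openshift-storage
data:
  CSI_LOG_LEVEL: "5"   # example override; the operator uses this value instead of its built-in default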

5.4. Easy configuration of Ceph target size ratios

With this update, it is possible to change the target size ratio for any pool. In previous versions, the pools deployed by Rook in the Ceph cluster were assigned a target_ratio of 0.49 for both the RBD and CephFS data pools, which could cause an under-allocation of PGs for the RBD pool and an over-allocation of PGs for the CephFS metadata pool. For more information, see Configuration of pool target size ratios.
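
At the Rook layer, a pool's target size ratio can be expressed through the parameters field of the CephBlockPool CRD. The sketch below is illustrative only (pool name and ratio are placeholders; follow the linked documentation for the supported procedure):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: example-pool           # illustrative name
  namespace: openshift-storage
spec:
  replicated:
    size: 3
  parameters:
    target_size_ratio: "0.8"   # expected fraction of total cluster capacity for this pool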

5.5. Ephemeral storage for pods

Ephemeral volume support enables users to specify ephemeral volumes in their pod specifications and tie the lifecycle of the PVC to the pod.
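
A generic ephemeral volume is declared inline in the pod specification through a volumeClaimTemplate; the PVC it generates is owned by the pod and is removed when the pod is deleted. A minimal sketch (pod name, image, and storage class are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod                                        # illustrative name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: ocs-storagecluster-ceph-rbd  # assumed RBD storage class name
            resources:
              requests:
                storage: 5Gi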

5.6. Multisite Configurations for RGW in OpenShift Data Foundation

This feature supports multisite configurations such as Zone, ZoneGroup, or Realm for internal or external OpenShift Data Foundation clusters. This setup helps replicate data to different sites and recover the data in case of a failure.
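
The multisite topology is modelled with Rook CRDs such as CephObjectRealm, CephObjectZoneGroup, and CephObjectZone. The heavily abbreviated sketch below shows only the relationship between the three objects; all names are illustrative, and the pool sizing, object store wiring, and realm pull secrets needed for a real configuration are omitted:

apiVersion: ceph.rook.io/v1
kind: CephObjectRealm
metadata:
  name: example-realm
  namespace: openshift-storage
---
apiVersion: ceph.rook.io/v1
kind: CephObjectZoneGroup
metadata:
  name: example-zonegroup
  namespace: openshift-storage
spec:
  realm: example-realm              # the realm this zone group belongs to
---
apiVersion: ceph.rook.io/v1
kind: CephObjectZone
metadata:
  name: example-zone
  namespace: openshift-storage
spec:
  zoneGroup: example-zonegroup      # the zone group this zone belongs to
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3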

5.7. Multicloud Object Gateway (MCG) only on Single Node Cluster

In this release, a lightweight object storage solution is provided for single node OpenShift (SNO) clusters using MCG with a backing store layered on top of local storage. Previously, deployments running on SNO could only use block storage.
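
As an illustration, a backing store layered on local storage can be declared as a pv-pool BackingStore that requests its volumes from a local storage class. A minimal sketch (names, size, and storage class are illustrative):

apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  name: local-backingstore     # illustrative name
  namespace: openshift-storage
spec:
  type: pv-pool
  pvPool:
    numVolumes: 1
    storageClass: localblock   # assumed local storage class name
    resources:
      requests:
        storage: 50Gi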

5.8. Using trusted certificates to ensure transactions are secure and private

This feature provides in-transit encryption for object storage between OpenShift Data Foundation and Red Hat Ceph Storage when using external mode. It enables all data to be encrypted in transit and at rest. For more information, see the knowledge base article on how to use trusted certificates.
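
The exact steps for OpenShift Data Foundation are covered in the linked knowledge base article. As a general OpenShift pattern, the CA that signs the external Red Hat Ceph Storage endpoints can be added to the cluster-wide trust bundle by creating a ConfigMap in the openshift-config namespace and referencing it from the cluster proxy configuration. A rough sketch (the ConfigMap name and certificate content are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ca                # illustrative name
  namespace: openshift-config
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    <PEM-encoded CA certificate for the external endpoints>
    -----END CERTIFICATE-----
---
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: custom-ca              # points the cluster trust bundle at the ConfigMap above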