Chapter 2. New features and enhancements

This section describes new features and enhancements in Red Hat OpenShift AI 2-latest.

2.1. New features

Distributed workloads

Distributed workloads enable data scientists to use multiple cluster nodes in parallel for faster, more efficient data processing and model training. The CodeFlare framework simplifies task orchestration and monitoring, and offers seamless integration for automated resource scaling and optimal node utilization with advanced GPU support.

Designed for data scientists, the CodeFlare framework enables direct workload configuration from Jupyter Notebooks or Python code, ensuring a low barrier to adoption and streamlined, uninterrupted workflows. Distributed workloads significantly reduce task completion time, and enable the use of larger datasets and more complex models.
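
As an illustrative sketch, the following example requests a small Ray cluster from a notebook by using the CodeFlare SDK. The cluster name, namespace, token, server URL, and resource values are placeholders, and the import paths and parameter names can vary between codeflare-sdk releases:

# Illustrative sketch: request a small Ray cluster from a notebook with the
# CodeFlare SDK. Names, namespace, token, and resource values are placeholders,
# and parameter names can differ between codeflare-sdk releases.
from codeflare_sdk.cluster.auth import TokenAuthentication
from codeflare_sdk.cluster.cluster import Cluster, ClusterConfiguration

# Authenticate against the OpenShift cluster (token and server are placeholders).
auth = TokenAuthentication(
    token="sha256~example-token",
    server="https://api.example-cluster:6443",
    skip_tls=False,
)
auth.login()

# Describe the Ray cluster that should be created for the workload.
cluster = Cluster(ClusterConfiguration(
    name="example-raycluster",
    namespace="example-project",
    num_workers=2,
    min_cpus=1,
    max_cpus=2,
    min_memory=4,
    max_memory=8,
    num_gpus=0,
))

cluster.up()          # submit the cluster request
cluster.wait_ready()  # block until the Ray cluster is running
print(cluster.details())

# ... submit distributed training tasks against the Ray cluster ...

cluster.down()        # release the nodes when finished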

Authorization provider for single-model serving platform
You can now add Authorino as an authorization provider for the single-model serving (KServe) platform. Adding an authorization provider allows you to enable token authorization for models that you deploy on the platform, which ensures that only authorized parties can make inference requests to the models.
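
As an illustrative sketch, the following example sends an inference request to a deployed model after token authorization is enabled. The route, model name, token, and payload are placeholders, and the exact inference path depends on the serving runtime and the protocol it exposes:

# Illustrative sketch: call a model deployed on the single-model serving
# platform with token authorization enabled. The route, model name, token,
# and payload shape are placeholders.
import requests

INFERENCE_URL = "https://example-model-example-project.apps.example-cluster/v1/models/example-model:predict"
TOKEN = "sha256~example-service-account-token"  # token for an authorized service account

response = requests.post(
    INFERENCE_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"instances": [[1.0, 2.0, 3.0, 4.0]]},
    verify=True,  # point this at a CA bundle if the route uses a custom CA
    timeout=30,
)
response.raise_for_status()
print(response.json())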

2.2. Enhancements

Improved Data Science Projects user interface
The Data Science Projects user interface (UI) has been redesigned to make it easier to access and get started with the different project components. The new design includes an updated layout, a more visually oriented interface, and additional UI text that provides an overview of each project component.
Support for Kubeflow Pipelines v2 in data science pipelines
To keep OpenShift AI updated with the latest features, data science pipelines are now based on Kubeflow Pipelines (KFP) version 2.0. Data Science Pipelines (DSP) 2.0 is enabled and deployed by default in OpenShift AI. For more information, see Enabling Data Science Pipelines 2.0.
Important

Previously, data science pipelines in OpenShift AI were based on Kubeflow Pipelines v1. It is no longer possible to deploy, view, or edit the details of pipelines that are based on DSP 1.0 from the dashboard in OpenShift AI 2-latest. If you already use data science pipelines, Red Hat recommends that you stay on OpenShift AI 2.8 until full feature parity in DSP 2.0 has been delivered in a stable OpenShift AI release and you are ready to migrate to the new pipeline solution.

DSP 2.0 contains an installation of Argo Workflows. OpenShift AI does not support direct customer usage of this installation of Argo Workflows. To install or upgrade to OpenShift AI 2.9 with DSP 2.0, ensure that there is no existing installation of Argo Workflows on your cluster.

If you want to use existing pipelines and workbenches with DSP 2.0 after upgrading to OpenShift AI 2-latest, you must update your workbenches to use the 2024.1 notebook image version and then manually migrate your pipelines from DSP 1.0 to 2.0. For more information, see Upgrading to DSP 2.0.
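
As an illustrative sketch, the following example shows how a simple pipeline is written with the KFP v2 SDK that DSP 2.0 is based on, and compiled to a YAML package that you can import. The component logic, names, and base image are placeholders:

# Illustrative sketch of a pipeline written against the KFP v2 SDK.
# Component and pipeline names, and the base image, are placeholders.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.9")
def add(a: float, b: float) -> float:
    return a + b


@dsl.pipeline(name="example-addition-pipeline")
def addition_pipeline(x: float = 1.0, y: float = 2.0) -> float:
    first = add(a=x, b=y)
    second = add(a=first.output, b=y)
    return second.output


if __name__ == "__main__":
    # Compile the pipeline to a YAML package that can be imported and run.
    compiler.Compiler().compile(
        pipeline_func=addition_pipeline,
        package_path="addition_pipeline.yaml",
    )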

Updated workbench images
The packages preinstalled in workbench images have been updated in the 2024.1 image versions. You can optimize your development environment by using the latest workbench image versions. The Python packages in the workbench images incorporate recent advancements in the Python ecosystem, including updated versions of PyTorch and TensorFlow. The operating systems and tools for the code-server, RStudio, Elyra, Habana, and CUDA workbench images have also been updated.
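
As an illustrative sketch, you can confirm which package versions are preinstalled in a running workbench from a notebook cell. The package list below is a placeholder, and the versions reported depend on the workbench image and image version that you select:

# Illustrative check of the framework versions preinstalled in a workbench image.
# The packages listed and the versions printed depend on the selected image.
import importlib.metadata as metadata

for package in ("torch", "tensorflow", "numpy", "pandas"):
    try:
        print(f"{package}: {metadata.version(package)}")
    except metadata.PackageNotFoundError:
        print(f"{package}: not installed in this image")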