Chapter 3. Resolved issues
This section describes notable issues that have been resolved in Red Hat OpenShift Data Science.
ODH-DASHBOARD-1639 - Wrong TLS value in dashboard route
Previously, when a route was created for the OpenShift Data Science dashboard on OpenShift, the tls.termination field had an invalid default value of Reencrypt. This issue is now resolved. The new value is reencrypt.
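For reference, a valid reencrypt TLS configuration on an OpenShift Route looks like the following sketch (the resource names are illustrative, not the exact dashboard route manifest):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: rhods-dashboard   # illustrative name
spec:
  to:
    kind: Service
    name: rhods-dashboard
  tls:
    termination: reencrypt   # must be lowercase; "Reencrypt" is not a valid value
```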
ODH-DASHBOARD-1638 - Name placeholder in Triggered Runs tab shows Scheduled run name
Previously, when you clicked Pipelines > Runs and then selected the Triggered tab to configure a triggered run, the example value shown in the Name field was Scheduled run name. This issue is now resolved.
ODH-DASHBOARD-1547 - "We can’t find that page" message displayed in dashboard when pipeline operator installed in background
Previously, when you used the Data Science Pipelines page of the dashboard to install the OpenShift Pipelines Operator, when the Operator installation was complete, the page refreshed to show a "We can’t find that page" message. This issue is now resolved. When the Operator installation is complete, the dashboard redirects you to the Pipelines page, where you can create a pipeline server.
ODH-DASHBOARD-1545 - Dashboard keeps scrolling to bottom of project when Models tab is expanded
Previously, on the Data Science Projects page of the dashboard, if you clicked the Deployed models tab to expand it and then tried to perform other actions on the page, the page automatically scrolled back to the Deployed models section. This affected your ability to perform other actions. This issue is now resolved.
NOTEBOOKS-156 - Elyra included an example runtime called Test
Previously, Elyra included an example runtime configuration called Test. If you selected this configuration when running a data science pipeline, you could see errors. The Test configuration has now been removed.
RHODS-8939 - Default shared memory for a Jupyter notebook created in a previous release causes a runtime error
Previously, for a Jupyter notebook created in a release earlier than 1.3.1, the default shared memory was set to 64 MB and you could not change this value in the notebook configuration.
Starting with release 1.3.1, this issue is fixed and any new notebook's shared memory is set to the size of the node.
To fix this issue for an existing notebook, you must recreate the notebook or follow the process described in the Known Issues section of these release notes.
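Raising a pod's shared memory above the 64 MB default is commonly done in Kubernetes by mounting a memory-backed emptyDir volume over /dev/shm. The following fragment is a generic sketch of that pattern, with illustrative names; it is not the exact RHODS notebook manifest:

```yaml
spec:
  containers:
  - name: notebook
    volumeMounts:
    - mountPath: /dev/shm   # replaces the default 64 MB shm mount
      name: shm
  volumes:
  - name: shm
    emptyDir:
      medium: Memory        # backed by node memory, sized up to the node's capacity
```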
RHODS-8932 - Incorrect cron format was displayed by default when scheduling a recurring pipeline run
When you scheduled a recurring pipeline run by configuring a cron job, the OpenShift Data Science interface displayed an incorrect format by default. It now displays the correct format.
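Recurring runs are scheduled with standard cron syntax. The following is a minimal sketch of a five-field cron format check (minute, hour, day of month, month, day of week); it assumes the common five-field form and does not cover every extension, such as a six-field variant with seconds:

```python
import re

# One permissive pattern per cron field: "*", numbers, ranges, steps, or lists.
FIELD = r"(\*|[0-9]+(-[0-9]+)?)(/[0-9]+)?(,(\*|[0-9]+(-[0-9]+)?)(/[0-9]+)?)*"
CRON_RE = re.compile(r"^\s*" + r"\s+".join([FIELD] * 5) + r"\s*$")

def looks_like_cron(expr: str) -> bool:
    """Rough check that expr has five plausible cron fields."""
    return bool(CRON_RE.match(expr))
```

For example, `looks_like_cron("*/5 9-17 * * 1,3,5")` accepts a run every five minutes during business hours on Monday, Wednesday, and Friday.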
RHODS-9374 - Pipelines with non-unique names did not appear in the data science project user interface
If you launched a notebook from a Jupyter application that supported Elyra, or if you used a workbench, when you submitted a pipeline to be run, pipelines with non-unique names did not appear in the Pipelines section of the relevant data science project page or the Pipelines heading of the data science pipelines page. This issue has now been resolved.
RHODS-9329 - Deploying a custom model-serving runtime could result in an error message
Previously, if you used the OpenShift Data Science dashboard to deploy a custom model-serving runtime, the deployment process could fail with an Error retrieving Serving Runtime message. This issue is now resolved.
RHODS-9064 - After upgrade, the Data Science Pipelines tab was not enabled on the OpenShift Data Science dashboard
When you upgraded from OpenShift Data Science 1.26 to OpenShift Data Science 1.28, the Data Science Pipelines tab was not enabled in the OpenShift Data Science dashboard. This issue is resolved in OpenShift Data Science 1.29.
RHODS-9443 - Exporting an Elyra pipeline exposed S3 storage credentials in plain text
In OpenShift Data Science 1.28.0, when you exported an Elyra pipeline from JupyterLab in Python DSL format or YAML format, the generated output contained S3 storage credentials in plain text. This issue has been resolved in OpenShift Data Science 1.28.1. However, after you upgrade to OpenShift Data Science 1.28.1, if your deployment contains a data science project with a pipeline server and a data connection, you must perform the following additional actions for the fix to take effect:
- Refresh your browser page.
- Stop any running workbenches in your deployment and restart them.
Furthermore, to confirm that your Elyra runtime configuration contains the fix, perform the following actions:
- In the left sidebar of JupyterLab, click Runtimes.
- Hover the cursor over the runtime configuration that you want to view and click the Edit button. The Data Science Pipelines runtime configuration page opens.
- Confirm that KUBERNETES_SECRET is defined as the value in the Cloud Object Storage Authentication Type field.
- Close the runtime configuration without changing it.
RHODS-8460 - When editing the details of a shared project, the user interface remained in a loading state without reporting an error
When a user with permission to edit a project attempted to edit its details, the user interface remained in a loading state and did not display an appropriate error message. Users with permission to edit projects cannot edit fields such as the project's description; they can edit only components belonging to the project, such as its workbenches, data connections, and storage.
The user interface now displays an appropriate error message and does not try to update the project description.
RHODS-8482 - Data science pipeline graphs did not display node edges for running pipelines
If you ran pipelines that did not contain Tekton-formatted when expressions in their YAML code, the OpenShift Data Science user interface did not display connecting edges to and from graph nodes. For example, if you used a pipeline containing the runAfter property or Workspaces, the user interface displayed the graph for the executed pipeline without edge connections. The OpenShift Data Science user interface now displays connecting edges to and from graph nodes.
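The runAfter property mentioned above declares an explicit ordering edge between Tekton tasks. The following fragment is an illustrative sketch (task and pipeline names are invented), showing the kind of edge that previously failed to render in the graph:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  tasks:
  - name: fetch-data
    taskRef:
      name: fetch-data-task
  - name: train-model
    runAfter:
    - fetch-data          # ordering edge from fetch-data to train-model
    taskRef:
      name: train-model-task
```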
RHODS-8923 - Newly created data connections were not detected when you attempted to create a pipeline server
If you created a data connection from within a Data Science project, and then attempted to create a pipeline server, the Configure a pipeline server dialog did not detect the data connection that you created. This issue is now fixed.
RHODS-8461 - When sharing a project with another user, the OpenShift Data Science user interface text was misleading
When you attempted to share a Data Science project with another user, the user interface text misleadingly implied that users could edit all of its details, such as its description. However, users can edit only components belonging to a project, such as its workbenches, data connections, and storage. This issue is now fixed, and the user interface text no longer implies that users can edit all of a project's details.
RHODS-8462 - Users with "Edit" permission could not create a Model Server
Users with "Edit" permissions can now create a Model Server without token authorization. Users must have "Admin" permissions to create a Model Server with token authorization.
RHODS-8796 - OpenVINO Model Server runtime did not have the required flag to force GPU usage
OpenShift Data Science includes the OpenVINO Model Server (OVMS) model-serving runtime by default. When you configured a new model server and chose this runtime, the Configure model server dialog enabled you to specify a number of GPUs to use with the model server. However, when you finished configuring the model server and deployed models from it, the model server did not actually use any GPUs. This issue is now fixed and the model server uses the GPUs.
RHODS-8861 - Changing the host project when creating a pipeline run resulted in an inaccurate list of available pipelines
If you changed the host project while creating a pipeline run, the interface failed to make the pipelines of the new host project available. Instead, the interface showed pipelines that belong to the project you initially selected on the Data Science Pipelines > Runs page. This issue is now fixed. You no longer select a pipeline from the Create run page. The pipeline selection is automatically updated when you click the Create run button, based on the current project and its pipeline.
RHODS-8249 - Environment variables uploaded as ConfigMap were stored in Secret instead
Previously, in the OpenShift Data Science interface, when you added environment variables to a workbench by uploading a ConfigMap configuration, the variables were stored in a Secret object instead. This issue is now fixed.
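In Kubernetes, workbench environment variables can be sourced from either object type; uploading a ConfigMap should produce the first form below, not the second. This is a generic illustration with invented names, not the exact manifest the dashboard generates:

```yaml
envFrom:
- configMapRef:
    name: workbench-env-vars     # plain-text, non-sensitive configuration
- secretRef:
    name: workbench-env-secret   # base64-encoded, for sensitive values
```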
RHODS-7975 - Workbenches could have multiple data connections
Previously, if you changed the data connection for a workbench, the existing data connection was not released. As a result, a workbench could stay connected to multiple data sources. This issue is now fixed.
RHODS-7948 - Uploading a secret file containing environment variables resulted in double-encoded values
Previously, when creating a workbench in a data science project, if you uploaded a YAML-based secret file containing environment variables, the environment variable values were not decoded. Then, in the resulting OpenShift secret created by this process, the encoded values were encoded again. This issue is now fixed.
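The double-encoding failure mode can be reproduced in a few lines: values in a YAML-based Secret's data section are already base64-encoded, so encoding them a second time means a consumer that decodes once recovers base64 text rather than the original value.

```python
import base64

original = "s3-access-key"

# Values in a Secret's "data" section are already base64-encoded.
encoded_once = base64.b64encode(original.encode()).decode()

# The bug: treating the encoded value as plain text and encoding it again.
encoded_twice = base64.b64encode(encoded_once.encode()).decode()

# A consumer decodes once and gets base64 text, not the original value.
decoded = base64.b64decode(encoded_twice).decode()
assert decoded == encoded_once
assert decoded != original
```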
RHODS-6429 - An error was displayed when creating a workbench with the Intel OpenVINO or Anaconda Professional Edition images
Previously, when you created a workbench with the Intel OpenVINO or Anaconda Professional Edition images, an error appeared during the creation process. However, the workbench was still successfully created. This issue is now fixed.
RHODS-6372 - Idle notebook culler did not take active terminals into account
Previously, if a notebook image had a running terminal, but no active, running kernels, the idle notebook culler detected the notebook as inactive and stopped the terminal. This issue is now fixed.
RHODS-5700 - Data connections could not be created or connected to when creating a workbench
When creating a workbench, users were unable to create a new data connection or connect to existing data connections. This issue is now fixed.
RHODS-6281 - OpenShift Data Science administrators could not access Settings page if an admin group was deleted from cluster
Previously, if a Red Hat OpenShift Data Science administrator group was deleted from the cluster, OpenShift Data Science administrator users could no longer access the Settings page on the OpenShift Data Science dashboard. In particular, the following behavior was seen:
- When an OpenShift Data Science administrator user tried to access the Settings → User management page, a "Page Not Found" error appeared.
Cluster administrators did not lose access to the Settings page on the OpenShift Data Science dashboard. When a cluster administrator accessed the Settings → User management page, a warning message appeared, indicating that the deleted OpenShift Data Science administrator group no longer existed in OpenShift. The deleted administrator group was then removed from OdhDashboardConfig, and administrator access was restored.
This issue is now fixed.
RHODS-1968 - Deleted users stayed logged in until dashboard was refreshed
Previously, when a user’s permissions for the Red Hat OpenShift Data Science dashboard were revoked, the user would notice the change only after a refresh of the dashboard page.
This issue is now fixed. When a user’s permissions are revoked, the OpenShift Data Science dashboard locks the user out within 30 seconds, without the need for a refresh.
RHODS-6384 - A workbench’s data connection was incorrectly updated when creating a duplicated data connection
When creating a data connection that contained the same name as an existing data connection, the data connection creation failed, but the associated workbench still restarted and connected to the wrong data connection. This issue has been resolved. Workbenches now connect to the correct data connection.
RHODS-6370 - Workbenches failed to receive the latest toleration
Previously, to acquire the latest toleration, users had to attempt to edit the relevant workbench, make no changes, and save the workbench again. Users can now apply the latest toleration change by stopping and then restarting their data science project’s workbench.
RHODS-6779 - Models failed to be served after upgrading from OpenShift Data Science 1.20 to OpenShift Data Science 1.21
When upgrading from OpenShift Data Science 1.20 to OpenShift Data Science 1.21, the modelmesh-serving pod attempted to pull a non-existent image, causing an image pull error. As a result, models could not be served using the model serving feature in OpenShift Data Science. The odh-openvino-servingruntime-container-v1.21.0-15 image now deploys successfully.
RHODS-5945 - Anaconda Professional Edition could not be enabled in OpenShift Data Science
Anaconda Professional Edition could not be enabled for use in OpenShift Data Science. Instead, an InvalidImageName error was displayed in the associated pod's Events page. Anaconda Professional Edition can now be successfully enabled.
RHODS-5822 - Admin users were not warned when usage exceeded 90% and 100% for PVCs created by data science projects
Warnings indicating that a PVC had exceeded 90% and 100% of its capacity were not displayed to admin users for PVCs created by data science projects. Admin users can now view warnings on the dashboard when a PVC exceeds 90% and 100% of its capacity.
RHODS-5889 - Error message was not displayed if a data science notebook was stuck in "pending" status
If a notebook pod could not be created, the OpenShift Data Science interface did not show an error message. An error message is now displayed if a data science notebook cannot be spawned.
RHODS-5886 - Returning to the Hub Control Panel dashboard from the data science workbench failed
If you attempted to return to the dashboard from your workbench Jupyter notebook by clicking on File → Log Out, you were redirected to the dashboard and remained on a "Logging out" page. Likewise, if you attempted to return to the dashboard by clicking on File → Hub Control Panel, you were incorrectly redirected to the Start a notebook server page. Returning to the Hub Control Panel dashboard from the data science workbench now works as expected.
RHODS-6101 - Administrators were unable to stop all notebook servers
OpenShift Data Science administrators could not stop all notebook servers simultaneously. Administrators can now stop all notebook servers using the Stop all servers button and stop a single notebook by selecting Stop server from the action menu beside the relevant user.
RHODS-5891 - Workbench event log was not clearly visible
When creating a workbench, users could not easily locate the event log window in the OpenShift Data Science interface. The Starting label under the Status column is now underlined when you hover over it, indicating you can click on it to view the notebook status and the event log.
RHODS-6296 - ISV icons did not render when using a browser other than Google Chrome
When using a browser other than Google Chrome, not all ISV icons under Explore and Resources pages were rendered. ISV icons now display properly on all supported browsers.
RHODS-3182 - Incorrect number of available GPUs was displayed in Jupyter
When a user attempted to create a notebook instance in Jupyter, the maximum number of GPUs available for scheduling was not updated as GPUs were assigned. Jupyter now displays the correct number of GPUs available.
RHODS-5890 - When multiple persistent volumes were mounted to the same directory, workbenches failed to start
When mounting more than one persistent volume (PV) to the same mount folder in the same workbench, creation of the notebook pod failed and no errors were displayed to indicate there was an issue.
RHODS-5768 - Data science projects were not visible to users in Red Hat OpenShift Data Science
Previously, removing the [DSP] suffix at the end of a project's Display Name property caused the associated data science project to no longer be visible. It is no longer possible for users to remove this suffix.
RHODS-5701 - Data connection configuration details were overwritten
When a data connection was added to a workbench, the configuration details for that data connection were saved in environment variables. When a second data connection was added, the configuration details were saved using the same environment variables, which meant that the configuration for the first data connection was overwritten. Currently, users can add a maximum of one data connection to each workbench.
RHODS-5252 - The notebook Administration page did not provide administrator access to a user’s notebook server
The notebook Administration page, accessed from the OpenShift Data Science dashboard, did not provide the means for an administrator to access a user’s notebook server. Administrators were restricted to only starting or stopping a user’s notebook server.
RHODS-2438 - PyTorch and TensorFlow images were unavailable when upgrading
When upgrading from OpenShift Data Science 1.3 to a later version, PyTorch and TensorFlow images were unavailable to users for approximately 30 minutes. As a result, users were unable to start PyTorch and TensorFlow notebooks in Jupyter during the upgrade process. This issue has now been resolved.
RHODS-5354 - Environment variable names were not validated when starting a notebook server
Environment variable names were not validated on the Start a notebook server page. If an invalid environment variable was added, users were unable to successfully start a notebook. The environment variable name is now checked in real time. If an invalid environment variable name is entered, an error message indicates that valid environment variable names must consist of alphabetic characters, digits, _, -, or ., and must not start with a digit.
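The validation rule quoted above can be sketched as a single regular expression. This is an illustrative implementation of the rule as stated in the error message, not the dashboard's actual validation code:

```python
import re

# Rule from the error message: alphabetic characters, digits, '_', '-', or '.',
# and the name must not start with a digit.
ENV_NAME_RE = re.compile(r"^[A-Za-z_.-][A-Za-z0-9_.-]*$")

def is_valid_env_var_name(name: str) -> bool:
    """Check a candidate environment variable name against the stated rule."""
    return bool(ENV_NAME_RE.match(name))
```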
RHODS-4617 - The Number of GPUs drop-down was only visible if there were GPUs available
Previously, the Number of GPUs drop-down was only visible on the Start a notebook server page if GPU nodes were available. The Number of GPUs drop-down now also correctly displays if an autoscaling machine pool is defined in the cluster, even if no GPU nodes are currently available, possibly resulting in the provisioning of a new GPU node on the cluster.
RHODS-5420 - Cluster admin did not get administrator access if it was the only user present in the cluster
Previously, when the cluster admin was the only user present in the cluster, it did not get Red Hat OpenShift administrator access automatically. Administrator access is now correctly applied to the cluster admin user.
RHODS-4321 - Incorrect package version displayed during notebook selection
The Start a notebook server page displayed an incorrect version number (11.4 instead of 11.7) for the CUDA notebook image. The version of CUDA installed is no longer specified on this page.
RHODS-5001 - Admin users could add invalid tolerations to notebook pods
An admin user could add invalid tolerations on the Cluster settings page without triggering an error. If an invalid toleration was added, users were unable to successfully start notebooks. The toleration key is now checked in real time. If an invalid toleration name is entered, an error message indicates that valid toleration names consist of alphanumeric characters, -, _, or ., and must start and end with an alphanumeric character.
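The toleration key rule described above can likewise be expressed as a regular expression. This is an illustrative sketch of the rule as stated, not the dashboard's actual validator:

```python
import re

# Rule from the error message: alphanumeric characters plus '-', '_', '.',
# starting and ending with an alphanumeric character.
TOLERATION_KEY_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?$")

def is_valid_toleration_key(key: str) -> bool:
    """Check a candidate toleration key against the stated rule."""
    return bool(TOLERATION_KEY_RE.match(key))
```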
RHODS-5100 - Group role bindings were not applied to cluster administrators
Previously, if you had assigned cluster admin privileges to a group rather than a specific user, the dashboard failed to recognize administrative privileges for users in the administrative group. Group role bindings are now correctly applied to cluster administrators as expected.
RHODS-4947 - Old Minimal Python notebook image persisted after upgrade
After upgrading from OpenShift Data Science 1.14 to 1.15, the older version of the Minimal Python notebook persisted, including all associated package versions. The older version of the Minimal Python notebook no longer persists after upgrade.
RHODS-4935 - Excessive "missing x-forwarded-access-token header" error messages displayed in dashboard log
The rhods-dashboard pod’s log contained an excessive number of "missing x-forwarded-access-token header" error messages due to a readiness probe hitting the /status endpoint. This issue has now been resolved.
RHODS-2653 - Error occurred while fetching the generated images in the sample Pachyderm notebook
An error occurred when a user attempted to fetch an image using the sample Pachyderm notebook in Jupyter. The error stated that the image could not be found. Pachyderm has corrected this issue.
RHODS-4584 - Jupyter failed to start a notebook server using the OpenVINO notebook image
Jupyter’s Start a notebook server page failed to start a notebook server using the OpenVINO notebook image. Intel has provided an update to the OpenVINO operator to correct this issue.
RHODS-4923 - A non-standard check box displayed after disabling usage data collection
After disabling usage data collection on the Cluster settings page, when a user accessed another area of the OpenShift Data Science dashboard, and then returned to the Cluster settings page, the Allow collection of usage data check box had a non-standard style applied, and therefore did not look the same as other check boxes when selected or cleared.
RHODS-4938 - Incorrect headings were displayed in the Notebook Images page
The Notebook Images page, accessed from the Settings page on the OpenShift Data Science dashboard, displayed incorrect headings in the user interface. The Notebook image settings heading displayed as BYON image settings, and the Import Notebook images heading displayed as Import BYON images. The correct headings are now displayed as expected.
RHODS-4818 - Jupyter was unable to display images when the NVIDIA GPU add-on was installed
The Start a notebook server page did not display notebook images after installing the NVIDIA GPU add-on. Images are now correctly displayed, and can be started from the Start a notebook server page.
RHODS-4797 - PVC usage limit alerts were not sent when usage exceeded 90% and 100%
Alerts indicating when a PVC exceeded 90% and 100% of its capacity failed to be triggered and sent. These alerts are now triggered and sent as expected.
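Alerts of this kind are typically defined as Prometheus recording rules over kubelet volume metrics. The following is a hypothetical sketch of such a rule; the actual RHODS alert definitions may differ:

```yaml
# Hypothetical PrometheusRule fragment for a 90% PVC usage alert.
groups:
- name: pvc-usage
  rules:
  - alert: PVCUsageAbove90Percent
    expr: |
      kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes > 0.9
    for: 5m
    labels:
      severity: warning
```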
RHODS-4366 - Cluster settings were reset on operator restart
When the OpenShift Data Science operator pod was restarted, cluster settings were sometimes reset to their default values, removing any custom configuration. The OpenShift Data Science operator was restarted when a new version of OpenShift Data Science was released, and when the node that ran the operator failed. This issue occurred because the operator deployed ConfigMaps incorrectly. Operator deployment instructions have been updated so that this no longer occurs.
RHODS-4318 - The OpenVINO notebook image failed to build successfully
The OpenVINO notebook image failed to build successfully and displayed an error message. This issue has now been resolved.
RHODS-3743 - Starburst Galaxy quick start did not provide download link in the instruction steps
The Starburst Galaxy quick start, located on the Resources page on the dashboard, required the user to open the explore-data.ipynb notebook, but failed to provide a link within the instruction steps. Instead, the link was provided in the quick start’s introduction.
RHODS-1974 - Changing alert notification emails required pod restart
Changes to the list of notification email addresses in the Red Hat OpenShift Data Science Add-On were not applied until after the rhods-operator pod and the prometheus-* pod were restarted.
RHODS-2738 - Red Hat OpenShift API Management 1.15.2 add-on installation did not successfully complete
For OpenShift Data Science installations that are integrated with the Red Hat OpenShift API Management 1.15.2 add-on, the Red Hat OpenShift API Management installation process did not successfully obtain the SMTP credentials secret. Subsequently, the installation did not complete.
RHODS-3237 - GPU tutorial did not appear on dashboard
The "GPU computing" tutorial, located at Gtc2018-numba, did not appear on the Resources page on the dashboard.
RHODS-3069 - GPU selection persisted when GPU nodes were unavailable
When a user provisioned a notebook server with GPU support, and the utilized GPU nodes were subsequently removed from the cluster, the user could not create a notebook server. This occurred because the most recently used setting for the number of attached GPUs was used by default.
RHODS-3181 - Pachyderm now compatible with OpenShift Dedicated 4.10 clusters
Pachyderm was not initially compatible with OpenShift Dedicated 4.10, and so was not available in OpenShift Data Science running on an OpenShift Dedicated 4.10 cluster. Pachyderm is now available on and compatible with OpenShift Dedicated 4.10.
RHODS-2160 - Uninstall process failed to complete when both OpenShift Data Science and OpenShift API Management were installed
When OpenShift Data Science and OpenShift API Management are installed together on the same cluster, they use the same Virtual Private Cluster (VPC). The uninstall process for these Add-ons attempts to delete the VPC. Previously, when both Add-ons were installed, the uninstall process for one service was blocked because the other service still had resources in the VPC. The cleanup process has been updated so that this conflict does not occur.
RHODS-2747 - Images were incorrectly updated after upgrading OpenShift Data Science
After the process to upgrade OpenShift Data Science completed, Jupyter failed to update its notebook images. This was due to an issue with the image caching mechanism. Images now update correctly after an upgrade.
RHODS-2425 - Incorrect TensorFlow and TensorBoard versions displayed during notebook selection
The Start a notebook server page displayed incorrect version numbers (2.4.0) for TensorFlow and TensorBoard in the TensorFlow notebook image. These versions have been corrected to TensorFlow 2.7.0 and TensorBoard 2.6.0.
RHODS-24339 - Quick start links did not display for enabled applications
For some applications, the Open quick start link failed to display on the application’s card on the Enabled page. As a result, users did not have direct access to the quick start tour for the relevant application.
RHODS-2215 - Incorrect Python versions displayed during notebook selection
The Start a notebook server page displayed incorrect versions of Python for the TensorFlow and PyTorch notebook images. Additionally, the third integer of package version numbers is no longer displayed.
RHODS-1977 - Ten-minute wait after notebook server start failure
If the Jupyter leader pod failed while the notebook server was being started, the user could not access their notebook server until the pod restarted, which took approximately ten minutes. This process has been improved so that the user is redirected to their server when a new leader pod is elected. If this process times out, users see a 504 Gateway Timeout error, and can refresh to access their server.