Chapter 4. Known issues

This section describes known issues in Red Hat OpenShift Data Science and any known workarounds.

IBM Watson Studio not available in OpenShift Data Science 1.3
IBM Watson Studio is not available in OpenShift Data Science 1.3 because it is not yet compatible with OpenShift Dedicated 4.9, which is used by OpenShift Data Science. There is currently no workaround for this issue.
Incorrect package versions displayed during notebook selection
The Start a notebook server page displays incorrect Python versions for the TensorFlow and PyTorch notebook images. Both images display Python 3.8.6 but actually use Python 3.8.8.
Uninstall does not work when OpenShift API Management is also installed

When OpenShift Data Science and OpenShift API Management are installed together on the same cluster, they use the same Virtual Private Cloud (VPC). The uninstall process for these Add-ons attempts to delete the VPC. When both Add-ons are installed, the uninstall process for one service is blocked because the other service still has resources in the VPC.

Workaround: Before uninstalling OpenShift Data Science, run the following command to edit the postgres resource definition, and remove any lines related to finalizers:

$ oc edit postgres.integreatly.org -n redhat-rhods-operator
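As an alternative to editing the resource interactively, the finalizers can be cleared with a merge patch; a hedged sketch, in which <resource-name> stands for the name reported by the first command (setting metadata.finalizers to null removes the field):

```shell
# List the postgres resources to find the resource name.
$ oc get postgres.integreatly.org -n redhat-rhods-operator

# Clear the finalizers on the named resource so deletion can proceed.
$ oc patch postgres.integreatly.org <resource-name> -n redhat-rhods-operator \
    --type=merge -p '{"metadata":{"finalizers":null}}'
```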
Gateway errors during notebook server creation
If the leader JupyterHub pod fails during notebook server creation and a new leader pod is not selected before a user is redirected to their notebook server, users may see either a 502 Gateway Timeout error page or a 502 Bad Gateway error page. A new leader pod is selected after a few seconds. To recover from this error, wait a few seconds and then refresh the page.
Unnecessary warnings about missing Graphics Processing Units (GPUs)

The TensorFlow notebook image checks for graphics processing units (GPUs) whenever a notebook is run, and issues warnings about missing GPUs when none are present. These warnings can safely be ignored, but you can disable them by running the following code in a notebook cell, before importing TensorFlow, whenever you launch a notebook server that uses the TensorFlow notebook image:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # 3 = suppress INFO, WARNING, and ERROR log messages
Cannot delete Git repositories in JupyterLab file browser

When a user attempts to delete a directory using the JupyterLab file browser, deletion fails if the directory is not empty. Hidden files such as the .git directory in a Git repository are not shown in the JupyterLab file browser, so Git repositories cannot be deleted from the JupyterLab file browser.

Workaround: To delete a Git repository from JupyterLab:

  1. Use the JupyterLab launcher to open a Terminal.
  2. Run the remove command, rm -rf <path>, replacing <path> with the path to the Git repository directory, for example, repos/my-project-repo.
Cannot set container size during notebook server creation
The Container size dropdown menu is intermittently not displayed on the Create a notebook server page. Users cannot select a container size other than the default when this menu is not displayed. Refreshing the page may cause the menu to display correctly.
Previously authenticated sessions persist after user configuration change

When an administrator logs in to JupyterHub and later configures a custom user group to replace a default user group, the JupyterHub session that was initially authenticated using the default group persists for up to five minutes in the same browser window. This mainly affects administrators attempting to test permissions after adding or removing a custom user group for their identity provider.

Workaround: After changing user group configuration, manually log out of all sessions before testing updated user permissions.

OpenShift Data Science hyperlink still visible after uninstall
When the OpenShift Data Science Add-on is uninstalled from an OpenShift Dedicated cluster, the link to the OpenShift Data Science interface remains visible in the application launcher menu. Clicking this link results in a "Page Not Found" error because OpenShift Data Science is no longer available.
User sessions persist in some components
Although users of OpenShift Data Science and its components are authenticated through OpenShift, session management is separate from authentication. This means that logging out of OpenShift Dedicated or OpenShift Data Science does not affect a logged-in JupyterHub session running on those platforms. When a user’s permissions change, that user must log out of all current sessions so that the changes take effect.
Deleted users stay logged in to JupyterHub for up to 5 minutes
When a user’s permissions for JupyterHub are revoked, it takes up to five minutes for JupyterHub to log the user out. After a user has been removed from a valid user group, the user is able to spawn a new notebook server for about 30 seconds, and is able to continue working in JupyterLab for up to five minutes before they are logged out.
Changing alert notification emails requires pod restart

Changes to the list of notification email addresses in the Red Hat OpenShift Data Science Add-on are not applied until after the rhods-operator pod and the prometheus-* pod are restarted.

Workaround: To apply the changed configuration:

  1. Change into the redhat-ods-operator project and restart the rhods-operator pod.
  2. Wait for the rhods-operator pod to restart and return to the Running state.
  3. Change into the redhat-ods-monitoring project and restart the prometheus-* pod.
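The steps above can be sketched with oc commands; this is a hedged outline, in which the <...> placeholders stand for the actual pod names shown by oc get pods, and it assumes the pods are managed by controllers that recreate them automatically after deletion:

```shell
# Steps 1-2: restart the operator pod and wait for it to return to Running.
$ oc project redhat-ods-operator
$ oc get pods
$ oc delete pod <rhods-operator-pod-name>
$ oc get pods --watch

# Step 3: restart the Prometheus pod in the monitoring project.
$ oc project redhat-ods-monitoring
$ oc get pods
$ oc delete pod <prometheus-pod-name>
```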
Removed users are shown in the JupyterHub administrative interface
When a user’s permission to access JupyterHub is revoked, they are prevented from creating or starting notebook servers, but their user name still appears in the list of users in the JupyterHub administrative interface. This happens because the cleanup step to remove that user from JupyterHub’s user list is missing. There is currently no customer workaround for this issue.
Notebook servers shut down after 24 hours
A JupyterHub user can be logged in for a maximum of 24 hours. After 24 hours, user credentials expire, the user is logged out of JupyterHub, and their notebook server pod is stopped and deleted regardless of any work running in the notebook server. There is currently no workaround for this issue.