Chapter 2. Serving small and medium-sized models

For deploying small and medium-sized models, OpenShift AI includes a multi-model serving platform that is based on the ModelMesh component. On the multi-model serving platform, you can deploy multiple models on the same model server, and the models share the server resources.

2.1. Configuring model servers

2.1.1. Enabling the multi-model serving platform

To use the multi-model serving platform, you must first enable the platform.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • If you are using specialized OpenShift AI groups, you are part of the admin group (for example, rhoai-admins) in OpenShift.
  • Your cluster administrator has not edited the OpenShift AI dashboard configuration to disable the ability to select the multi-model serving platform, which uses the ModelMesh component. For more information, see Dashboard configuration options.

Procedure

  1. In the left menu of the OpenShift AI dashboard, click Settings > Cluster settings.
  2. Locate the Model serving platforms section.
  3. Select the Multi-model serving platform checkbox.
  4. Click Save changes.

2.1.2. Adding a custom model-serving runtime for the multi-model serving platform

A model-serving runtime adds support for a specified set of model frameworks (that is, formats). By default, the multi-model serving platform includes the OpenVINO Model Server runtime. However, if this runtime doesn’t meet your needs (it doesn’t support a particular model format, for example), you can add your own, custom runtime.

As an administrator, you can use the Red Hat OpenShift AI dashboard to add and enable a custom model-serving runtime. You can then choose the custom runtime when you create a new model server for the multi-model serving platform.

Note

OpenShift AI enables you to add your own custom runtimes, but does not support the runtimes themselves. You are responsible for correctly configuring and maintaining custom runtimes. You are also responsible for ensuring that you are licensed to use any custom runtimes that you add.

Prerequisites

  • You have logged in to OpenShift AI as an administrator.
  • You are familiar with how to add a model server to your project. When you have added a custom model-serving runtime, you must configure a new model server to use the runtime.
  • You have reviewed the example runtimes in the kserve/modelmesh-serving repository. You can use these examples as starting points. However, each runtime requires some further modification before you can deploy it in OpenShift AI. The required modifications are described in the following procedure.

    Note

    OpenShift AI includes the OpenVINO Model Server runtime by default. You do not need to add this runtime to OpenShift AI.

Procedure

  1. From the OpenShift AI dashboard, click Settings > Serving runtimes.

    The Serving runtimes page opens and shows the model-serving runtimes that are already installed and enabled.

  2. To add a custom runtime, choose one of the following options:

    • To start with an existing runtime (for example, the OpenVINO Model Server runtime), click the action menu (⋮) next to the existing runtime and then click Duplicate.
    • To add a new custom runtime, click Add serving runtime.
  3. In the Select the model serving platforms this runtime supports list, select Multi-model serving platform.

    Note

    The multi-model serving platform supports only the REST protocol. Therefore, you cannot change the default value in the Select the API protocol this runtime supports list.

  4. Optional: If you started a new runtime (rather than duplicating an existing one), add your code by choosing one of the following options:

    • Upload a YAML file

      1. Click Upload files.
      2. In the file browser, select a YAML file on your computer. This file might be one of the example runtimes that you downloaded from the kserve/modelmesh-serving repository.

        The embedded YAML editor opens and shows the contents of the file that you uploaded.

    • Enter YAML code directly in the editor

      1. Click Start from scratch.
      2. Enter or paste YAML code directly in the embedded editor. The YAML that you paste might be copied from one of the example runtimes in the kserve/modelmesh-serving repository.
  5. Optional: If you are adding one of the example runtimes in the kserve/modelmesh-serving repository, perform the following modifications:

    1. In the YAML editor, locate the kind field for your runtime. Update the value of this field to ServingRuntime.
    2. In the kustomization.yaml file in the kserve/modelmesh-serving repository, take note of the newName and newTag values for the runtime that you want to add. You will specify these values in a later step.
    3. In the YAML editor for your custom runtime, locate the containers.image field.
    4. Update the value of the containers.image field to newName:newTag, based on the values that you noted in the kustomization.yaml file. Some examples are shown below, and a complete sketch of a modified runtime follows this procedure.

      Nvidia Triton Inference Server
      image: nvcr.io/nvidia/tritonserver:23.04-py3
      Seldon Python MLServer
      image: seldonio/mlserver:1.3.2
      TorchServe
      image: pytorch/torchserve:0.7.1-cpu
  6. In the metadata.name field, ensure that the value of the runtime you are adding is unique (that is, the value doesn’t match a runtime that you have already added).
  7. Optional: To configure a custom display name for the runtime that you are adding, add a metadata.annotations.openshift.io/display-name field and specify a value, as shown in the following example:

    apiVersion: serving.kserve.io/v1alpha1
    kind: ServingRuntime
    metadata:
      name: mlserver-0.x
      annotations:
        openshift.io/display-name: MLServer
    Note

    If you do not configure a custom display name for your runtime, OpenShift AI shows the value of the metadata.name field.

  8. Click Add.

    The Serving runtimes page opens and shows the updated list of runtimes that are installed. Observe that the runtime you added is automatically enabled.

  9. Optional: To edit your custom runtime, click the action menu (⋮) and select Edit.

Verification

  • The custom model-serving runtime that you added is shown in an enabled state on the Serving runtimes page.
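
Taken together, these modifications produce a ServingRuntime resource similar to the following sketch. The sketch is loosely based on the Seldon MLServer example from the kserve/modelmesh-serving repository; the supported model formats, endpoints, adapter settings, and resource values shown here are illustrative, so copy the actual values from the example runtime that you are adapting.

  apiVersion: serving.kserve.io/v1alpha1
  kind: ServingRuntime                    # set to ServingRuntime, as described in the procedure
  metadata:
    name: mlserver-1.x                    # must be unique among the runtimes you have added
    annotations:
      openshift.io/display-name: Seldon MLServer
  spec:
    supportedModelFormats:                # model formats that this runtime serves (illustrative)
      - name: sklearn
        version: "0"
        autoSelect: true
    multiModel: true
    grpcEndpoint: "port:8085"
    grpcDataEndpoint: "port:8001"
    containers:
      - name: mlserver
        image: seldonio/mlserver:1.3.2    # newName:newTag from kustomization.yaml
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: "1"
            memory: 2Gi
    builtInAdapter:                       # adapter settings are copied from the upstream example
      serverType: mlserver
      runtimeManagementPort: 8001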

2.1.3. Adding a model server for the multi-model serving platform

When you have enabled the multi-model serving platform, you must configure a model server to deploy models. If you require extra computing power for use with large datasets, you can assign accelerators to your model server.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • If you are using specialized OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins) in OpenShift.
  • You have created a data science project that you can add a model server to.
  • You have enabled the multi-model serving platform.
  • If you want to use a custom model-serving runtime for your model server, you have added and enabled the runtime. See Adding a custom model-serving runtime.
  • If you want to use graphics processing units (GPUs) with your model server, you have enabled GPU support in OpenShift AI. See Enabling GPU support in OpenShift AI.

Procedure

  1. In the left menu of the OpenShift AI dashboard, click Data Science Projects.

    The Data Science Projects page opens.

  2. Click the name of the project that you want to configure a model server for.

    A project details page opens.

  3. Click the Models tab.
  4. Perform one of the following actions:

    • If you see a Multi-model serving platform tile, click Add model server on the tile.
    • If you do not see any tiles, click the Add model server button.

    The Add model server dialog opens.

  5. In the Model server name field, enter a unique name for the model server.
  6. From the Serving runtime list, select a model-serving runtime that is installed and enabled in your OpenShift AI deployment.

    Note

    If you are using a custom model-serving runtime with your model server and want to use GPUs, you must ensure that your custom runtime supports GPUs and is appropriately configured to use them.

  7. In the Number of model replicas to deploy field, specify a value.
  8. From the Model server size list, select a value.
  9. Optional: If you selected Custom in the preceding step, configure the following settings in the Model server size section to customize your model server (a sketch of the corresponding Kubernetes resource settings follows this procedure):

    1. In the CPUs requested field, specify the number of CPUs to use with your model server. Use the list beside this field to specify the value in cores or millicores.
    2. In the CPU limit field, specify the maximum number of CPUs to use with your model server. Use the list beside this field to specify the value in cores or millicores.
    3. In the Memory requested field, specify the requested memory for the model server in gibibytes (Gi).
    4. In the Memory limit field, specify the maximum memory limit for the model server in gibibytes (Gi).
  10. Optional: From the Accelerator list, select an accelerator.

    1. If you selected an accelerator in the preceding step, specify the number of accelerators to use.
  11. Optional: In the Model route section, select the Make deployed models available through an external route checkbox to make your deployed models available to external clients.
  12. Optional: In the Token authorization section, select the Require token authentication checkbox to require token authentication for your model server. To finish configuring token authentication, perform the following actions:

    1. In the Service account name field, enter the name of the service account for which the token will be generated. When the model server is configured, the token is generated and displayed in the Token secret field.
    2. To add an additional service account, click Add a service account and enter another service account name.
  13. Click Add.

    • The model server that you configured appears in the Models tab for the project, in the Models and model servers list.
  14. Optional: To update the model server, click the action menu (⋮) beside the model server and select Edit model server.
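
The values that you enter for a custom model server size correspond to standard Kubernetes CPU and memory requests and limits on the model server replicas. The following is a rough sketch only; the numbers are examples, not recommendations.

  resources:
    requests:
      cpu: "4"          # CPUs requested, in cores (use values such as 500m for millicores)
      memory: 8Gi       # memory requested, in gibibytes
    limits:
      cpu: "8"          # maximum CPUs the model server can use
      memory: 10Gi      # maximum memory the model server can use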

2.1.4. Deleting a model server

When you no longer need a model server to host models, you can remove it from your data science project.

Note

When you remove a model server, you also remove the models that are hosted on that model server. As a result, the models are no longer available to applications.

Prerequisites

  • You have created a data science project and an associated model server.
  • You have notified the users of the applications that access the models that the models will no longer be available.
  • If you are using specialized OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins) in OpenShift.

Procedure

  1. From the OpenShift AI dashboard, click Data Science Projects.

    The Data Science Projects page opens.

  2. Click the name of the project from which you want to delete the model server.

    A project details page opens.

  3. Click the Models tab.
  4. Click the action menu (⋮) beside the model server that you want to delete and then click Delete model server.

    The Delete model server dialog opens.

  5. Enter the name of the model server in the text field to confirm that you intend to delete it.
  6. Click Delete model server.

Verification

  • The model server that you deleted is no longer displayed in the Models tab for the project.

2.2. Working with deployed models

2.2.1. Deploying a model by using the multi-model serving platform

You can deploy trained models on OpenShift AI to enable you to test and implement them into intelligent applications. Deploying a model makes it available as a service that you can access by using an API. This enables you to return predictions based on data inputs.

When you have enabled the multi-model serving platform, you can deploy models on the platform.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • If you are using specialized OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins) in OpenShift.
  • You have enabled the multi-model serving platform.
  • You have created a data science project and added a model server.
  • You have access to S3-compatible object storage.
  • For the model that you want to deploy, you know the associated folder path in your S3-compatible object storage bucket.

Procedure

  1. In the left menu of the OpenShift AI dashboard, click Data Science Projects.

    The Data Science Projects page opens.

  2. Click the name of the project that you want to deploy a model in.

    A project details page opens.

  3. Click the Models tab.
  4. Click Deploy model.
  5. Configure properties for deploying your model as follows:

    1. In the Model name field, enter a unique name for the model that you are deploying.
    2. From the Model framework list, select a framework for your model.

      Note

      The Model framework list shows only the frameworks that are supported by the model-serving runtime that you specified when you configured your model server.

    3. To specify the location of the model you want to deploy from S3-compatible object storage, perform one of the following sets of actions:

      • To use an existing data connection

        1. Select Existing data connection.
        2. From the Name list, select a data connection that you previously defined.
        3. In the Path field, enter the folder path that contains the model in your specified data source.
      • To use a new data connection

        1. To define a new data connection that your model can access, select New data connection.
        2. In the Name field, enter a unique name for the data connection.
        3. In the Access key field, enter the access key ID for the S3-compatible object storage provider.
        4. In the Secret key field, enter the secret access key for the S3-compatible object storage account that you specified.
        5. In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket.
        6. In the Region field, enter the default region of your S3-compatible object storage account.
        7. In the Bucket field, enter the name of your S3-compatible object storage bucket.
        8. In the Path field, enter the folder path in your S3-compatible object storage that contains the model. (See the data connection sketch at the end of this section.)
    4. Click Deploy.

Verification

  • Confirm that the deployed model is shown in the Models tab for the project, and on the Model Serving page of the dashboard with a checkmark in the Status column.
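
The values that you enter for a data connection are stored as a Kubernetes Secret in your project, using AWS-style key names. The following is a hedged sketch with placeholder values; the dashboard also applies its own labels and annotations to the Secret, which are omitted here.

  apiVersion: v1
  kind: Secret
  metadata:
    name: my-storage                      # illustrative data connection name
  type: Opaque
  stringData:
    AWS_ACCESS_KEY_ID: <access_key>
    AWS_SECRET_ACCESS_KEY: <secret_key>
    AWS_S3_ENDPOINT: https://s3.example.com
    AWS_DEFAULT_REGION: us-east-1
    AWS_S3_BUCKET: my-bucket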

2.2.2. Viewing a deployed model

To analyze the results of your work, you can view a list of deployed models on Red Hat OpenShift AI. You can also view the current statuses of deployed models and their endpoints.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • If you are using specialized OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins) in OpenShift.

Procedure

  1. From the OpenShift AI dashboard, click Model Serving.

    The Deployed models page opens.

    For each model, the page shows details such as the model name, the project in which the model is deployed, the model-serving runtime that the model uses, and the deployment status.

  2. Optional: For a given model, click the link in the Inference endpoint column to see the inference endpoints for the deployed model. An example request against an inference endpoint is shown at the end of this section.

Verification

  • A list of previously deployed data science models is displayed on the Deployed models page.
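
After you locate an inference endpoint, you can send the deployed model a test request. The multi-model serving platform serves models over REST, and the following curl sketch assumes the KServe V2 inference protocol and a model server with token authentication enabled; the endpoint URL (typically ending in /v2/models/<model_name>/infer), token, tensor name, shape, datatype, and data values are placeholders that depend on your model. If token authentication is not enabled, omit the Authorization header.

  $ curl -s "<inference_endpoint>" \
      -H "Authorization: Bearer <token>" \
      -H "Content-Type: application/json" \
      -d '{"inputs": [{"name": "input-0", "shape": [1, 4], "datatype": "FP32", "data": [0.1, 0.2, 0.3, 0.4]}]}'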

2.2.3. Updating the deployment properties of a deployed model

You can update the deployment properties of a model that has been deployed previously. This allows you to change the model’s data connection and name.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • If you are using specialized OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins) in OpenShift.
  • You have deployed a model on OpenShift AI.

Procedure

  1. From the OpenShift AI dashboard, click Model Serving.

    The Deployed models page opens.

  2. Click the action menu (⋮) beside the model whose deployment properties you want to update and click Edit.

    The Deploy model dialog opens.

  3. Update the deployment properties of the model as follows:

    1. In the Model name field, enter a new, unique name for the model.
    2. From the Model framework list, select a framework for your model.

      Note

      The Model framework list shows only the frameworks that are supported by the model-serving runtime that you specified when you configured your model server.

    3. To update how you have specified the location of your model, perform one of the following sets of actions:

      • If you previously specified an existing data connection

        1. In the Path field, update the folder path that contains the model in your specified data source.
      • If you previously specified a new data connection

        1. In the Name field, enter a new, unique name for the data connection.
        2. In the Access key field, update the access key ID for the S3-compatible object storage provider.
        3. In the Secret key field, update the secret access key for the S3-compatible object storage account that you specified.
        4. In the Endpoint field, update the endpoint of your S3-compatible object storage bucket.
        5. In the Region field, update the default region of your S3-compatible object storage account.
        6. In the Bucket field, update the name of your S3-compatible object storage bucket.
        7. In the Path field, update the folder path in your S3-compatible object storage that contains the model.
    4. Click Deploy.

Verification

  • The model whose deployment properties you updated is displayed on the Model Serving page of the dashboard.

2.2.4. Deleting a deployed model

You can delete models you have previously deployed. This enables you to remove deployed models that are no longer required.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • If you are using specialized OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins) in OpenShift.
  • You have deployed a model.

Procedure

  1. From the OpenShift AI dashboard, click Model Serving.

    The Deployed models page opens.

  2. Click the action menu (⋮) beside the deployed model that you want to delete and click Delete.

    The Delete deployed model dialog opens.

  3. Enter the name of the deployed model in the text field to confirm that you intend to delete it.
  4. Click Delete deployed model.

Verification

  • The model that you deleted is no longer displayed on the Deployed models page.

2.3. Configuring monitoring for the multi-model serving platform

The multi-model serving platform includes model and model server metrics for the ModelMesh component. ModelMesh generates its own set of metrics and does not rely on the underlying model-serving runtimes to provide them. The metrics that ModelMesh generates cover model request rates and timings; model loading and unloading rates, times, and sizes; internal queuing delays; capacity and usage; cache state; and least recently used models. For more information, see ModelMesh metrics.

After you have configured monitoring, you can view metrics for the ModelMesh component.

Prerequisites

  • You have cluster administrator privileges for your OpenShift Container Platform cluster.
  • You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI.
  • You are familiar with creating a config map for monitoring user-defined workloads. You will perform similar steps in this procedure.
  • You are familiar with enabling monitoring for user-defined projects in OpenShift. You will perform similar steps in this procedure.
  • You have assigned the monitoring-rules-view role to users that will monitor metrics.
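
For example, a cluster administrator can assign the monitoring-rules-view role to a user for a specific project with a command similar to the following; the user name and namespace are placeholders:

  $ oc policy add-role-to-user monitoring-rules-view <username> -n <project_namespace>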

Procedure

  1. In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example:

    $ oc login <openshift_cluster_url> -u <admin_username> -p <password>
  2. Define a ConfigMap object in a YAML file called uwm-cm-conf.yaml with the following contents:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          logLevel: debug
          retention: 15d

    The user-workload-monitoring-config object configures the components that monitor user-defined projects. Observe that the retention time is set to the recommended value of 15 days.

  3. Apply the configuration to create the user-workload-monitoring-config object.

    $ oc apply -f uwm-cm-conf.yaml
  4. Define another ConfigMap object in a YAML file called uwm-cm-enable.yaml with the following contents:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true

    The cluster-monitoring-config object enables monitoring for user-defined projects.

  5. Apply the configuration to create the cluster-monitoring-config object.

    $ oc apply -f uwm-cm-enable.yaml
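
Optionally, to confirm that monitoring for user-defined projects has started, check that the monitoring pods (for example, prometheus-user-workload-*) are running in the openshift-user-workload-monitoring namespace:

  $ oc get pods -n openshift-user-workload-monitoring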

2.4. Viewing model-serving runtime metrics for the multi-model serving platform

After a cluster administrator has configured monitoring for the multi-model serving platform, non-admin users can use the OpenShift web console to view model-serving runtime metrics for the ModelMesh component.

Prerequisites

  • A cluster administrator has configured monitoring for the multi-model serving platform.
  • You have been assigned the monitoring-rules-view role.
  • You are familiar with how to monitor project metrics in the OpenShift Container Platform web console.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Switch to the Developer perspective.
  3. In the left menu, click Observe.
  4. As described in monitoring project metrics, use the web console to run queries for modelmesh_* metrics.
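
For example, typing modelmesh_ in the query field lists the ModelMesh metrics that are available in your cluster. A query such as the following sketch sums the number of currently loaded models across model server pods; the metric name is illustrative, so check the ModelMesh metrics documentation for the exact names that your deployment exposes.

  sum(modelmesh_models_loaded_total)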

2.5. Monitoring model performance

2.5.1. Viewing performance metrics for all models on a model server

In OpenShift AI, you can monitor the following metrics for all the models that are deployed on a model server:

  • HTTP requests - The number of HTTP requests that have failed or succeeded for all models on the server.

    Note: You can also view the number of HTTP requests that have failed or succeeded for a specific model, as described in Viewing HTTP request metrics for a deployed model.

  • Average response time (ms) - For all models on the server, the average time it takes the model server to respond to requests.
  • CPU utilization (%) - The percentage of the CPU’s capacity that is currently being used by all models on the server.
  • Memory utilization (%) - The percentage of the system’s memory that is currently being used by all models on the server.

You can specify a time range and a refresh interval for these metrics to help you determine, for example, when the peak usage hours are and how the models are performing at a specified time.

Prerequisites

  • You have installed Red Hat OpenShift AI.
  • On the OpenShift cluster where OpenShift AI is installed, user workload monitoring is enabled.
  • You have logged in to OpenShift AI.
  • If you are using specialized OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins) in OpenShift.
  • You have deployed models on the multi-model serving platform.

Procedure

  1. From the OpenShift AI dashboard navigation menu, click Data Science Projects.

    The Data Science Projects page opens.

  2. Click the name of the project that contains the data science models that you want to monitor.
  3. In the project details page, click the Models tab.
  4. In the row for the model server that you are interested in, click the action menu (⋮) and then select View model server metrics.
  5. Optional: On the metrics page for the model server, set the following options:

    • Time range - Specifies how long to track the metrics. You can select one of these values: 1 hour, 24 hours, 7 days, and 30 days.
    • Refresh interval - Specifies how frequently the graphs on the metrics page are refreshed (to show the latest data). You can select one of these values: 15 seconds, 30 seconds, 1 minute, 5 minutes, 15 minutes, 30 minutes, 1 hour, 2 hours, and 1 day.
  6. Scroll down to view data graphs for HTTP requests, average response time, CPU utilization, and memory utilization.

Verification

On the metrics page for the model server, the graphs provide performance metric data.

2.5.2. Viewing HTTP request metrics for a deployed model

You can view a graph that illustrates the HTTP requests that have failed or succeeded for a specific model that is deployed on the multi-model serving platform.

Prerequisites

  • You have installed Red Hat OpenShift AI.
  • On the OpenShift cluster where OpenShift AI is installed, user workload monitoring is enabled.
  • Your cluster administrator has not edited the OpenShift AI dashboard configuration to hide the Endpoint Performance tab on the Model Serving page. For more information, see Dashboard configuration options.
  • You have logged in to OpenShift AI.
  • If you are using specialized OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins) in OpenShift.
  • You have deployed models on the multi-model serving platform.

Procedure

  1. From the OpenShift AI dashboard navigation menu, select Model Serving.
  2. On the Deployed models page, select the model that you are interested in.
  3. Optional: On the Endpoint performance tab, set the following options:

    • Time range - Specifies how long to track the metrics. You can select one of these values: 1 hour, 24 hours, 7 days, and 30 days.
    • Refresh interval - Specifies how frequently the graphs on the metrics page are refreshed (to show the latest data). You can select one of these values: 15 seconds, 30 seconds, 1 minute, 5 minutes, 15 minutes, 30 minutes, 1 hour, 2 hours, and 1 day.

Verification

The Endpoint performance tab shows a graph of the HTTP metrics for the model.