Chapter 6. Administer

6.1. Global configuration

The OpenShift Serverless Operator manages the global configuration of a Knative installation, including propagating values from the KnativeServing and KnativeEventing custom resources to system config maps. Any config map updates that you apply manually are overwritten by the Operator. However, modifying the Knative custom resources allows you to set values for these config maps.

Knative has multiple config maps that are named with the prefix config-. All Knative config maps are created in the same namespace as the custom resource that they apply to. For example, if the KnativeServing custom resource is created in the knative-serving namespace, all Knative Serving config maps are also created in this namespace.

The spec.config field in the Knative custom resources has one <name> entry for each config map, named config-<name>, with a value that is used for the config map data.
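
For example, the following sketch shows how an entry under spec.config maps to a config map. The network entry is propagated to the config-network config map; the default-external-scheme key shown here is one of the settings documented later in this chapter:

Example KnativeServing CR

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    network:
      default-external-scheme: "https"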

6.1.1. Configuring the default channel implementation

The default-ch-webhook config map can be used to specify the default channel implementation of Knative Eventing. The default channel implementation can be specified for the entire cluster, as well as for one or more namespaces. Currently the InMemoryChannel and KafkaChannel channel types are supported.

Prerequisites

  • You have cluster administrator permissions on OpenShift Container Platform.
  • You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster.
  • If you want to use Kafka channels as the default channel implementation, you must also install the KnativeKafka CR on your cluster.

Procedure

  • Modify the KnativeEventing custom resource to add configuration details for the default-ch-webhook config map:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
    spec:
      config: 1
        default-ch-webhook: 2
          default-ch-config: |
            clusterDefault: 3
              apiVersion: messaging.knative.dev/v1
              kind: InMemoryChannel
              spec:
                delivery:
                  backoffDelay: PT0.5S
                  backoffPolicy: exponential
                  retry: 5
            namespaceDefaults: 4
              my-namespace:
                apiVersion: messaging.knative.dev/v1beta1
                kind: KafkaChannel
                spec:
                  numPartitions: 1
                  replicationFactor: 1
    1
    In spec.config, you can specify the config maps for which you want to add modified configurations.
    2
    The default-ch-webhook config map can be used to specify the default channel implementation for the cluster or for one or more namespaces.
    3
    The cluster-wide default channel type configuration. In this example, the default channel implementation for the cluster is InMemoryChannel.
    4
    The namespace-scoped default channel type configuration. In this example, the default channel implementation for the my-namespace namespace is KafkaChannel.
    Important

    Configuring a namespace-specific default overrides any cluster-wide settings.
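
You can verify that the Operator propagated your settings by inspecting the resulting config map. The following command is an illustrative example; it assumes that the KnativeEventing custom resource was created in the knative-eventing namespace, as shown above:

Example command

$ oc get configmap default-ch-webhook -n knative-eventing -o yaml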

6.1.2. Enabling scale-to-zero

Knative Serving provides automatic scaling, or autoscaling, for applications to match incoming demand. You can use the enable-scale-to-zero spec to enable or disable scale-to-zero globally for applications on the cluster.

Prerequisites

  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have cluster administrator permissions.
  • You are using the default Knative Pod Autoscaler. The scale-to-zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler.

Procedure

  • Modify the enable-scale-to-zero spec in the KnativeServing custom resource (CR):

    Example KnativeServing CR

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
    spec:
      config:
        autoscaler:
          enable-scale-to-zero: "false" 1

    1
    The enable-scale-to-zero spec can be either "true" or "false". If set to true, scale-to-zero is enabled. If set to false, applications are scaled down to the configured minimum scale bound. The default value is "true".

6.1.3. Configuring the scale-to-zero grace period

Knative Serving provides automatic scaling down to zero pods for applications. You can use the scale-to-zero-grace-period spec to define an upper bound time limit that Knative waits for scale-to-zero machinery to be in place before the last replica of an application is removed.

Prerequisites

  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have cluster administrator permissions.
  • You are using the default Knative Pod Autoscaler. The scale-to-zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler.

Procedure

  • Modify the scale-to-zero-grace-period spec in the KnativeServing custom resource (CR):

    Example KnativeServing CR

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
    spec:
      config:
        autoscaler:
          scale-to-zero-grace-period: "30s" 1

    1
    The grace period time in seconds. The default value is 30 seconds.

6.1.4. Overriding system deployment configurations

You can override the default configurations for some specific deployments by modifying the deployments spec in the KnativeServing custom resource (CR). Currently, overriding default configuration settings is supported for the resources, replicas, labels, annotations, and nodeSelector fields.

In the following example, a KnativeServing CR overrides the webhook deployment so that:

  • The deployment has specified CPU and memory resource limits.
  • The deployment has 3 replicas.
  • The label example-label: label is added.
  • The annotation example-annotation: annotation is added.
  • The nodeSelector field is set to select nodes with the disktype: hdd label.
Note

The KnativeServing CR label and annotation settings override the deployment’s labels and annotations for both the deployment itself and the resulting pods.

KnativeServing CR example

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: ks
  namespace: knative-serving
spec:
  high-availability:
    replicas: 2
  deployments:
  - name: webhook
    resources:
    - container: webhook
      requests:
        cpu: 300m
        memory: 60Mi
      limits:
        cpu: 1000m
        memory: 1000Mi
    replicas: 3
    labels:
      example-label: label
    annotations:
      example-annotation: annotation
    nodeSelector:
      disktype: hdd
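
You can verify that the overrides were applied by inspecting the deployment. The following command is an illustrative example; it assumes the webhook deployment in the knative-serving namespace, as configured above:

Example command

$ oc get deployment webhook -n knative-serving -o jsonpath='{.spec.replicas}'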

6.1.5. Configuring the EmptyDir extension

emptyDir volumes are empty volumes that are created when a pod is created, and are used to provide temporary working disk space. emptyDir volumes are deleted when the pod they were created for is deleted.

The kubernetes.podspec-volumes-emptydir extension controls whether emptyDir volumes can be used with Knative Serving. To enable using emptyDir volumes, you must modify the KnativeServing custom resource (CR) to include the following YAML:

Example KnativeServing CR

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    features:
      kubernetes.podspec-volumes-emptydir: enabled
...
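
After the extension is enabled, a Knative service can mount an emptyDir volume. The following example is an illustrative sketch; the service name, image, volume name, and mount path are placeholders:

Example Knative service using an emptyDir volume

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
spec:
  template:
    spec:
      containers:
        - image: <image_url>
          volumeMounts:
            - mountPath: /cache
              name: cache-volume
      volumes:
        - name: cache-volume
          emptyDir: {}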

6.1.6. HTTPS redirection global settings

HTTPS redirection provides redirection for incoming HTTP requests. These redirected HTTP requests are encrypted. You can enable HTTPS redirection for all services on the cluster by configuring the httpProtocol spec for the KnativeServing custom resource (CR).

Example KnativeServing CR that enables HTTPS redirection

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    network:
      httpProtocol: "redirected"
...
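
You can check the behavior by sending a plain HTTP request to a Knative service route and confirming that the response is a redirect. The following command is an illustrative example; the service host is a placeholder and the exact redirect status code can vary:

Example command

$ curl -sI http://<service_host>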

6.1.7. Setting the URL scheme for external routes

The URL scheme of external routes defaults to HTTPS for enhanced security. This scheme is determined by the default-external-scheme key in the KnativeServing custom resource (CR) spec.

Default spec

...
spec:
  config:
    network:
      default-external-scheme: "https"
...

You can override the default spec to use HTTP by modifying the default-external-scheme key:

HTTP override spec

...
spec:
  config:
    network:
      default-external-scheme: "http"
...

6.1.8. Setting the Kourier Gateway service type

The Kourier Gateway is exposed by default as the ClusterIP service type. This service type is determined by the service-type ingress spec in the KnativeServing custom resource (CR).

Default spec

...
spec:
  ingress:
    kourier:
      service-type: ClusterIP
...

You can override the default service type to use a load balancer service type instead by modifying the service-type spec:

LoadBalancer override spec

...
spec:
  ingress:
    kourier:
      service-type: LoadBalancer
...
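
You can check the resulting service type of the Kourier Gateway. The following command is an illustrative example; it assumes that the Kourier service is named kourier and resides in the knative-serving-ingress namespace:

Example command

$ oc get svc kourier -n knative-serving-ingress -o jsonpath='{.spec.type}'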

6.1.9. Enabling PVC support

Some serverless applications need permanent data storage. To achieve this, you can configure persistent volume claims (PVCs) for your Knative services.

Important

PVC support for Knative services is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Procedure

  1. To enable Knative Serving to use PVCs and write to them, modify the KnativeServing custom resource (CR) to include the following YAML:

    Enabling PVCs with write access

    ...
    spec:
      config:
        features:
          "kubernetes.podspec-persistent-volume-claim": enabled
          "kubernetes.podspec-persistent-volume-write": enabled
    ...

    • The kubernetes.podspec-persistent-volume-claim extension controls whether persistent volumes (PVs) can be used with Knative Serving.
    • The kubernetes.podspec-persistent-volume-write extension controls whether PVs are available to Knative Serving with write access.
  2. To claim a PV, modify your service to include the PV configuration. For example, you might have a persistent volume claim with the following configuration:

    Note

    Use a storage class that supports the access mode that you are requesting. For example, you can use the ocs-storagecluster-cephfs storage class for the ReadWriteMany access mode.

    PersistentVolumeClaim configuration

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-pv-claim
      namespace: my-ns
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ocs-storagecluster-cephfs
      resources:
        requests:
          storage: 1Gi

    In this case, to claim a PV with write access, modify your service as follows:

    Knative service PVC configuration

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      namespace: my-ns
    ...
    spec:
     template:
       spec:
         containers:
             ...
             volumeMounts: 1
               - mountPath: /data
                 name: mydata
                 readOnly: false
         volumes:
           - name: mydata
             persistentVolumeClaim: 2
               claimName: example-pv-claim
               readOnly: false 3

    1
    Volume mount specification.
    2
    Persistent volume claim specification.
    3
    Flag that controls read-only access. Setting readOnly to false allows write access to the volume.
    Note

    To successfully use persistent storage in Knative services, you need additional configuration, such as the user permissions for the Knative container user.

6.1.10. Enabling init containers

Init containers are specialized containers that are run before application containers in a pod. They are generally used to implement initialization logic for an application, which may include running setup scripts or downloading required configurations. You can enable the use of init containers for Knative services by modifying the KnativeServing custom resource (CR).

Important

Init containers for Knative services is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Note

Init containers may cause longer application start-up times and should be used with caution for serverless applications, which are expected to scale up and down frequently.

Prerequisites

  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have cluster administrator permissions.

Procedure

  • Enable the use of init containers by adding the kubernetes.podspec-init-containers flag to the KnativeServing CR:

    Example KnativeServing CR

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
    spec:
      config:
        features:
          kubernetes.podspec-init-containers: enabled
    ...
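
After the flag is enabled, a Knative service can declare init containers in its pod spec. The following example is an illustrative sketch; the service name, images, and command are placeholders:

Example Knative service using an init container

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
spec:
  template:
    spec:
      initContainers:
        - name: prepare-config
          image: <init_image_url>
          command: ["sh", "-c", "echo preparing configuration"]
      containers:
        - image: <image_url>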

6.1.11. Tag-to-digest resolution

If the Knative Serving controller has access to the container registry, Knative Serving resolves image tags to a digest when you create a revision of a service. This is known as tag-to-digest resolution, and helps to provide consistency for deployments.

To give the controller access to the container registry on OpenShift Container Platform, you must create a secret and then configure controller custom certificates. You can configure controller custom certificates by modifying the controller-custom-certs spec in the KnativeServing custom resource (CR). The secret must reside in the same namespace as the KnativeServing CR.

If a secret is not included in the KnativeServing CR, this setting defaults to using public key infrastructure (PKI). When using PKI, the cluster-wide certificates are automatically injected into the Knative Serving controller by using the config-service-sa config map. The OpenShift Serverless Operator populates the config-service-sa config map with cluster-wide certificates and mounts the config map as a volume to the controller.
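
If you want to see which certificates are injected when PKI is used, you can inspect the config-service-sa config map. The following command is an illustrative example; it assumes the default knative-serving namespace:

Example command

$ oc get configmap config-service-sa -n knative-serving -o yaml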

6.1.11.1. Configuring tag-to-digest resolution by using a secret

If the controller-custom-certs spec uses the Secret type, the secret is mounted as a secret volume. Knative components consume the secret directly, assuming that the secret has the required certificates.

Prerequisites

  • You have cluster administrator permissions on OpenShift Container Platform.
  • You have installed the OpenShift Serverless Operator and Knative Serving on your cluster.

Procedure

  1. Create a secret:

    Example command

    $ oc -n knative-serving create secret generic custom-secret --from-file=<secret_name>.crt=<path_to_certificate>

  2. Configure the controller-custom-certs spec in the KnativeServing custom resource (CR) to use the Secret type:

    Example KnativeServing CR

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      controller-custom-certs:
        name: custom-secret
        type: Secret

6.1.12. Additional resources

6.2. Configuring Knative Kafka

Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities.

In addition to the Knative Eventing components that are provided as part of a core OpenShift Serverless installation, cluster administrators can install the KnativeKafka custom resource (CR).

Note

Knative Kafka is not currently supported for IBM Z and IBM Power Systems.

The KnativeKafka CR provides users with additional options, such as:

  • Kafka source
  • Kafka channel
  • Kafka broker (Technology Preview)
  • Kafka sink (Technology Preview)

6.2.1. Installing Knative Kafka

Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Knative Kafka functionality is available in an OpenShift Serverless installation if you have installed the KnativeKafka custom resource.

Prerequisites

  • You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster.
  • You have access to a Red Hat AMQ Streams cluster.
  • Install the OpenShift CLI (oc) if you want to use the verification steps.
  • You have cluster administrator permissions on OpenShift Container Platform.
  • You are logged in to the OpenShift Container Platform web console.

Procedure

  1. In the Administrator perspective, navigate to Operators → Installed Operators.
  2. Check that the Project dropdown at the top of the page is set to Project: knative-eventing.
  3. In the list of Provided APIs for the OpenShift Serverless Operator, find the Knative Kafka box and click Create Instance.
  4. Configure the KnativeKafka object in the Create Knative Kafka page.

    Important

    To use the Kafka channel, source, broker, or sink on your cluster, you must toggle the enabled switch to true for the options that you want to use. These switches are set to false by default. Additionally, to use the Kafka channel, broker, or sink, you must specify the bootstrap servers.

    Example KnativeKafka custom resource

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
        name: knative-kafka
        namespace: knative-eventing
    spec:
        channel:
            enabled: true 1
            bootstrapServers: <bootstrap_servers> 2
        source:
            enabled: true 3
        broker:
            enabled: true 4
            defaultConfig:
                bootstrapServers: <bootstrap_servers> 5
                numPartitions: <num_partitions> 6
                replicationFactor: <replication_factor> 7
        sink:
            enabled: true 8

    1
    Enables developers to use the KafkaChannel channel type in the cluster.
    2
    A comma-separated list of bootstrap servers from your AMQ Streams cluster.
    3
    Enables developers to use the KafkaSource event source type in the cluster.
    4
    Enables developers to use the Knative Kafka broker implementation in the cluster.
    5
    A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster.
    6
    Defines the number of partitions of the Kafka topics that back the Broker objects. The default is 10.
    7
    Defines the replication factor of the Kafka topics that back the Broker objects. The default is 3.
    8
    Enables developers to use a Kafka sink in the cluster.
    Note

    The replicationFactor value must be less than or equal to the number of nodes of your Red Hat AMQ Streams cluster.

    1. Using the form is recommended for simpler configurations that do not require full control of KnativeKafka object creation.
    2. Editing the YAML is recommended for more complex configurations that require full control of KnativeKafka object creation. You can access the YAML by clicking the Edit YAML link in the top right of the Create Knative Kafka page.
  5. Click Create after you have completed any of the optional configurations for Kafka. You are automatically directed to the Knative Kafka tab where knative-kafka is in the list of resources.

Verification

  1. Click on the knative-kafka resource in the Knative Kafka tab. You are automatically directed to the Knative Kafka Overview page.
  2. View the list of Conditions for the resource and confirm that they have a status of True.

    Kafka Knative Overview page showing Conditions

    If the conditions have a status of Unknown or False, wait a few moments and then refresh the page.

  3. Check that the Knative Kafka resources have been created:

    $ oc get pods -n knative-eventing

    Example output

    NAME                                        READY   STATUS    RESTARTS   AGE
    kafka-broker-dispatcher-7769fbbcbb-xgffn    2/2     Running   0          44s
    kafka-broker-receiver-5fb56f7656-fhq8d      2/2     Running   0          44s
    kafka-channel-dispatcher-84fd6cb7f9-k2tjv   2/2     Running   0          44s
    kafka-channel-receiver-9b7f795d5-c76xr      2/2     Running   0          44s
    kafka-controller-6f95659bf6-trd6r           2/2     Running   0          44s
    kafka-source-dispatcher-6bf98bdfff-8bcsn    2/2     Running   0          44s
    kafka-webhook-eventing-68dc95d54b-825xs     2/2     Running   0          44s

6.2.2. Configuring TLS authentication for Kafka brokers

Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. You can set up TLS for Kafka brokers by modifying the KnativeKafka custom resource (CR).

Prerequisites

  • You have cluster administrator permissions on OpenShift Container Platform.
  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have a Kafka cluster CA certificate stored as a .pem file.
  • You have a Kafka cluster client certificate and a key stored as .pem files.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create the certificate files as a secret in the knative-eventing namespace:

    $ oc create secret -n knative-eventing generic <secret_name> \
      --from-literal=protocol=SSL \
      --from-file=ca.crt=caroot.pem \
      --from-file=user.crt=certificate.pem \
      --from-file=user.key=key.pem
    Important

    Use the key names ca.crt, user.crt, and user.key. Do not change them.

  2. Edit the KnativeKafka CR and add a reference to your secret in the broker spec:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      broker:
        enabled: true
        defaultConfig:
          authSecretName: <secret_name>
    ...
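
You can confirm that the secret referenced by authSecretName exists by listing it in the knative-eventing namespace:

Example command

$ oc get secret <secret_name> -n knative-eventing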

6.2.3. Configuring SASL authentication for Kafka brokers

Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster, otherwise events cannot be produced or consumed. You can set up SASL for Kafka brokers by modifying the KnativeKafka custom resource (CR).

Prerequisites

  • You have cluster administrator permissions on OpenShift Container Platform.
  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have a username and password for a Kafka cluster.
  • You have chosen the SASL mechanism to use, for example PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.
  • If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster.
  • Install the OpenShift CLI (oc).
Note

It is recommended to enable TLS in addition to SASL.

Procedure

  1. Create the certificate files as a secret in the knative-eventing namespace:

    $ oc create secret -n knative-eventing generic <secret_name> \
      --from-literal=protocol=SASL_SSL \
      --from-literal=sasl.mechanism=<sasl_mechanism> \
      --from-file=ca.crt=caroot.pem \
      --from-literal=password="SecretPassword" \
      --from-literal=user="my-sasl-user"
    Important

    Use the key names ca.crt, password, and sasl.mechanism. Do not change them.

  2. Edit the KnativeKafka CR and add a reference to your secret in the broker spec:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      broker:
        enabled: true
        defaultConfig:
          authSecretName: <secret_name>
    ...

6.2.4. Additional resources

6.3. Serverless components in the Administrator perspective

If you do not want to switch to the Developer perspective in the OpenShift Container Platform web console or use the Knative (kn) CLI or YAML files, you can create Knative components by using the Administrator perspective of the OpenShift Container Platform web console.

6.3.1. Creating serverless applications using the Administrator perspective

Serverless applications are created and deployed as Kubernetes services, defined by a route and a configuration, and contained in a YAML file. To deploy a serverless application using OpenShift Serverless, you must create a Knative Service object.

Example Knative Service object YAML file

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello 1
  namespace: default 2
spec:
  template:
    spec:
      containers:
        - image: docker.io/openshift/hello-openshift 3
          env:
            - name: RESPONSE 4
              value: "Hello Serverless!"

1
The name of the application.
2
The namespace the application uses.
3
The image of the application.
4
The environment variable printed out by the sample application.

After the service is created and the application is deployed, Knative creates an immutable revision for this version of the application. Knative also performs network programming to create a route, ingress, service, and load balancer for your application and automatically scales your pods up and down based on traffic.

Prerequisites

To create serverless applications using the Administrator perspective, ensure that you have completed the following steps.

  • The OpenShift Serverless Operator and Knative Serving are installed.
  • You have logged in to the web console and are in the Administrator perspective.

Procedure

  1. Navigate to the Serverless → Serving page.
  2. In the Create list, select Service.
  3. Manually enter YAML or JSON definitions, or drag and drop a file into the editor.
  4. Click Create.
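
You can optionally verify that the service was created by listing it from the CLI. The following command is an illustrative example; it assumes the example service name hello in the default namespace and that the OpenShift CLI (oc) is installed:

Example command

$ oc get ksvc hello -n default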

6.3.2. Additional resources

6.4. Integrating Service Mesh with OpenShift Serverless

Using Service Mesh with OpenShift Serverless enables developers to configure additional networking and routing options.

The OpenShift Serverless Operator provides Kourier as the default ingress for Knative. However, you can use Service Mesh with OpenShift Serverless whether Kourier is enabled or not. Integrating with Kourier disabled allows you to configure additional networking and routing options that the Kourier ingress does not support.

Important

OpenShift Serverless only supports the use of Red Hat OpenShift Service Mesh functionality that is explicitly documented in this guide, and does not support other undocumented features.

6.4.1. Integrating Service Mesh with OpenShift Serverless natively

Integrating Service Mesh with OpenShift Serverless natively, without Kourier, allows you to use additional networking and routing options that are not supported by the default Kourier ingress, such as mTLS functionality.

The examples in the following procedures use the domain example.com. The example certificate for this domain is used as a certificate authority (CA) that signs the subdomain certificate.

To complete and verify these procedures in your deployment, you need either a certificate signed by a widely trusted public CA or a CA provided by your organization. Example commands must be adjusted according to your domain, subdomain, and CA.

You must configure the wildcard certificate to match the domain of your OpenShift Container Platform cluster. For example, if your OpenShift Container Platform console address is https://console-openshift-console.apps.openshift.example.com, you must configure the wildcard certificate so that the domain is *.apps.openshift.example.com. For more information about configuring wildcard certificates, see the following topic about Creating a certificate to encrypt incoming external traffic.

If you want to use any domain name, including those which are not subdomains of the default OpenShift Container Platform cluster domain, you must set up domain mapping for those domains. For more information, see the OpenShift Serverless documentation about Creating a custom domain mapping.

6.4.1.1. Creating a certificate to encrypt incoming external traffic

By default, the Service Mesh mTLS feature only secures traffic inside of the Service Mesh itself, between the ingress gateway and individual pods that have sidecars. To encrypt traffic as it flows into the OpenShift Container Platform cluster, you must generate a certificate before you enable the OpenShift Serverless and Service Mesh integration.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have installed the OpenShift Serverless Operator and Knative Serving.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. Create a root certificate and private key that signs the certificates for your Knative services:

    $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
        -subj '/O=Example Inc./CN=example.com' \
        -keyout root.key \
        -out root.crt
  2. Create a wildcard certificate:

    $ openssl req -nodes -newkey rsa:2048 \
        -subj "/CN=*.apps.openshift.example.com/O=Example Inc." \
        -keyout wildcard.key \
        -out wildcard.csr
  3. Sign the wildcard certificate:

    $ openssl x509 -req -days 365 -set_serial 0 \
        -CA root.crt \
        -CAkey root.key \
        -in wildcard.csr \
        -out wildcard.crt
  4. Create a secret by using the wildcard certificate:

    $ oc create -n istio-system secret tls wildcard-certs \
        --key=wildcard.key \
        --cert=wildcard.crt

    This certificate is picked up by the gateways created when you integrate OpenShift Serverless with Service Mesh, so that the ingress gateway serves traffic with this certificate.

6.4.1.2. Integrating Service Mesh with OpenShift Serverless

You can integrate Service Mesh with OpenShift Serverless without using Kourier by completing the following procedure.

Prerequisites

  • You have installed Red Hat OpenShift Service Mesh. OpenShift Serverless with Service Mesh is only supported for use with Red Hat OpenShift Service Mesh version 2.0.5 or higher.
  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have installed the OpenShift Serverless Operator.

    Important

    Do not install the Knative Serving component before completing the following procedures. There are additional steps required when creating the KnativeServing custom resource definition (CRD) to integrate Knative Serving with Service Mesh, which are not covered in the general Knative Serving installation procedure of the Administration guide.

  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. Create a ServiceMeshControlPlane object in the istio-system namespace. If you want to use the mTLS functionality, this must be enabled for the istio-system namespace.
  2. Add the namespaces that you would like to integrate with Service Mesh to the ServiceMeshMemberRoll object as members:

    apiVersion: maistra.io/v1
    kind: ServiceMeshMemberRoll
    metadata:
      name: default
      namespace: istio-system
    spec:
      members: 1
        - knative-serving
        - <namespace>
    1
    A list of namespaces to be integrated with Service Mesh.
    Important

    This list of namespaces must include the knative-serving namespace.

  3. Apply the ServiceMeshMemberRoll resource:

    $ oc apply -f <filename>
  4. Create the necessary gateways so that Service Mesh can accept traffic:

    Example knative-local-gateway object using HTTP

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: knative-ingress-gateway
      namespace: knative-serving
    spec:
      selector:
        istio: ingressgateway
      servers:
        - port:
            number: 443
            name: https
            protocol: HTTPS
          hosts:
            - "*"
          tls:
            mode: SIMPLE
            credentialName: <wildcard_certs> 1
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
     name: knative-local-gateway
     namespace: knative-serving
    spec:
     selector:
       istio: ingressgateway
     servers:
       - port:
           number: 8081
           name: http
           protocol: HTTP 2
         hosts:
           - "*"
    ---
    apiVersion: v1
    kind: Service
    metadata:
     name: knative-local-gateway
     namespace: istio-system
     labels:
       experimental.istio.io/disable-gateway-port-translation: "true"
    spec:
     type: ClusterIP
     selector:
       istio: ingressgateway
     ports:
       - name: http2
         port: 80
         targetPort: 8081

    1
    Add the name of your wildcard certificate.
    2
    The knative-local-gateway serves HTTP traffic. Using HTTP means that traffic coming from outside of Service Mesh, but using an internal hostname, such as example.default.svc.cluster.local, is not encrypted. You can set up encryption for this path by creating another wildcard certificate and an additional gateway that uses a different protocol spec.

    Example knative-local-gateway object using HTTPS

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: knative-local-gateway
      namespace: knative-serving
    spec:
      selector:
        istio: ingressgateway
      servers:
        - port:
            number: 443
            name: https
            protocol: HTTPS
          hosts:
            - "*"
          tls:
            mode: SIMPLE
            credentialName: <wildcard_certs>

  5. Apply the Gateway resources:

    $ oc apply -f <filename>
  6. Install Knative Serving by creating the following KnativeServing custom resource definition (CRD), which also enables the Istio integration:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      ingress:
        istio:
          enabled: true 1
      deployments: 2
      - name: activator
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: autoscaler
        annotations:
          "sidecar.istio.io/inject": "true"
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
    1
    Enables Istio integration.
    2
    Enables sidecar injection for Knative Serving data plane pods.
  7. Apply the KnativeServing resource:

    $ oc apply -f <filename>
  8. Create a Knative Service that has sidecar injection enabled and uses a pass-through route:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: <service_name>
      namespace: <namespace> 1
      annotations:
        serving.knative.openshift.io/enablePassthrough: "true" 2
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true" 3
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
        spec:
          containers:
          - image: <image_url>
    1
    A namespace that is part of the Service Mesh member roll.
    2
    Instructs Knative Serving to generate an OpenShift Container Platform pass-through enabled route, so that the certificates you have generated are served through the ingress gateway directly.
    3
    Injects Service Mesh sidecars into the Knative service pods.
  9. Apply the Service resource:

    $ oc apply -f <filename>

Verification

  • Access your serverless application by using a secure connection that is now trusted by the CA:

    $ curl --cacert root.crt <service_url>

    Example command

    $ curl --cacert root.crt https://hello-default.apps.openshift.example.com

    Example output

    Hello Openshift!

6.4.1.3. Enabling Knative Serving metrics when using Service Mesh with mTLS

If Service Mesh is enabled with mTLS, metrics for Knative Serving are disabled by default, because Service Mesh prevents Prometheus from scraping metrics. This section shows how to enable Knative Serving metrics when using Service Mesh and mTLS.

Prerequisites

  • You have installed the OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled.
  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. Specify prometheus as the metrics.backend-destination in the observability spec of the Knative Serving custom resource (CR):

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
    spec:
      config:
        observability:
          metrics.backend-destination: "prometheus"
    ...

    This step prevents metrics from being disabled by default.

  2. Apply the following network policy to allow traffic from the Prometheus namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-openshift-monitoring-ns
      namespace: knative-serving
    spec:
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              name: "openshift-monitoring"
      podSelector: {}
    ...
  3. Modify and reapply the default Service Mesh control plane in the istio-system namespace, so that it includes the following spec:

    ...
    spec:
      proxy:
        networking:
          trafficControl:
            inbound:
              excludedPorts:
              - 8444
    ...
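
To modify the control plane, you can edit the ServiceMeshControlPlane resource directly. The following command is an illustrative example; it assumes that the smcp short name is available and uses a placeholder for the control plane name:

Example command

$ oc edit smcp <control_plane_name> -n istio-system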

6.4.2. Integrating Service Mesh with OpenShift Serverless when Kourier is enabled

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have installed Red Hat OpenShift Service Mesh. OpenShift Serverless with Service Mesh and Kourier is supported for use with both Red Hat OpenShift Service Mesh versions 1.x and 2.x.

Procedure

  1. Add the namespaces that you would like to integrate with Service Mesh to the ServiceMeshMemberRoll object as members:

    apiVersion: maistra.io/v1
    kind: ServiceMeshMemberRoll
    metadata:
      name: default
      namespace: istio-system
    spec:
      members:
        - <namespace> 1
    ...
    1
    A list of namespaces to be integrated with Service Mesh.
  2. Apply the ServiceMeshMemberRoll resource:

    $ oc apply -f <filename>
  3. Create a network policy that permits traffic flow from Knative system pods to Knative services:

    1. For each namespace that you want to integrate with Service Mesh, create a NetworkPolicy resource:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-serving-system-namespace
        namespace: <namespace> 1
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                knative.openshift.io/part-of: "openshift-serverless"
        podSelector: {}
        policyTypes:
        - Ingress
      ...
      1
      Add the namespace that you want to integrate with Service Mesh.
      Note

      The knative.openshift.io/part-of: "openshift-serverless" label was added in OpenShift Serverless 1.22.0. If you are using OpenShift Serverless 1.21.1 or earlier, add the knative.openshift.io/part-of label to the knative-serving and knative-serving-ingress namespaces.

      Add the label to the knative-serving namespace:

      $ oc label namespace knative-serving knative.openshift.io/part-of=openshift-serverless

      Add the label to the knative-serving-ingress namespace:

      $ oc label namespace knative-serving-ingress knative.openshift.io/part-of=openshift-serverless
    2. Apply the NetworkPolicy resource:

      $ oc apply -f <filename>

6.5. Serverless administrator metrics

Metrics enable cluster administrators to monitor how OpenShift Serverless cluster components and workloads are performing.

You can view different metrics for OpenShift Serverless by navigating to Dashboards in the OpenShift Container Platform web console Administrator perspective.

6.5.1. Prerequisites

  • See the OpenShift Container Platform documentation on Managing metrics for information about enabling metrics for your cluster.
  • To view metrics for Knative components on OpenShift Container Platform, you need cluster administrator permissions, and access to the web console Administrator perspective.
Warning

If Service Mesh is enabled with mTLS, metrics for Knative Serving are disabled by default because Service Mesh prevents Prometheus from scraping metrics.

For information about resolving this issue, see Enabling Knative Serving metrics when using Service Mesh with mTLS.

Scraping the metrics does not affect autoscaling of a Knative service, because scraping requests do not go through the activator. Consequently, no scraping takes place if no pods are running.

6.5.2. Controller metrics

The following metrics are emitted by any component that implements controller logic. These metrics provide details about reconciliation operations and about the work queue to which reconciliation requests are added.

Metric name | Description | Type | Tags | Unit

work_queue_depth

The depth of the work queue.

Gauge

reconciler

Integer (no units)

reconcile_count

The number of reconcile operations.

Counter

reconciler, success

Integer (no units)

reconcile_latency

The latency of reconcile operations.

Histogram

reconciler, success

Milliseconds

workqueue_adds_total

The total number of add actions handled by the work queue.

Counter

name

Integer (no units)

workqueue_queue_latency_seconds

The length of time an item stays in the work queue before being requested.

Histogram

name

Seconds

workqueue_retries_total

The total number of retries that have been handled by the work queue.

Counter

name

Integer (no units)

workqueue_work_duration_seconds

The length of time it takes to process an item from the work queue.

Histogram

name

Seconds

workqueue_unfinished_work_seconds

The length of time that outstanding work queue items have been in progress.

Histogram

name

Seconds

workqueue_longest_running_processor_seconds

The length of time that the longest outstanding work queue item has been in progress.

Histogram

name

Seconds

6.5.3. Webhook metrics

Webhook metrics report useful information about operations. For example, if a large number of operations fail, this might indicate an issue with a user-created resource.

Metric name | Description | Type | Tags | Unit

request_count

The number of requests that are routed to the webhook.

Counter

admission_allowed, kind_group, kind_kind, kind_version, request_operation, resource_group, resource_namespace, resource_resource, resource_version

Integer (no units)

request_latencies

The response time for a webhook request.

Histogram

admission_allowed, kind_group, kind_kind, kind_version, request_operation, resource_group, resource_namespace, resource_resource, resource_version

Milliseconds

6.5.4. Knative Eventing metrics

Cluster administrators can view the following metrics for Knative Eventing components.

By aggregating the metrics by HTTP response code, events can be separated into two categories: successful events (2xx) and failed events (5xx).

6.5.4.1. Broker ingress metrics

You can use the following metrics to debug the broker ingress, see how it is performing, and see which events are being dispatched by the ingress component.

Metric name | Description | Type | Tags | Unit

event_count

Number of events received by a broker.

Counter

broker_name, event_type, namespace_name, response_code, response_code_class, unique_name

Integer (no units)

event_dispatch_latencies

The time taken to dispatch an event to a channel.

Histogram

broker_name, event_type, namespace_name, response_code, response_code_class, unique_name

Milliseconds

6.5.4.2. Broker filter metrics

You can use the following metrics to debug broker filters, see how they are performing, and see which events are being dispatched by the filters. You can also measure the latency of the filtering action on an event.

Metric name | Description | Type | Tags | Unit

event_count

Number of events received by a broker.

Counter

broker_name, container_name, filter_type, namespace_name, response_code, response_code_class, trigger_name, unique_name

Integer (no units)

event_dispatch_latencies

The time taken to dispatch an event to a channel.

Histogram

broker_name, container_name, filter_type, namespace_name, response_code, response_code_class, trigger_name, unique_name

Milliseconds

event_processing_latencies

The time it takes to process an event before it is dispatched to a trigger subscriber.

Histogram

broker_name, container_name, filter_type, namespace_name, trigger_name, unique_name

Milliseconds

6.5.4.3. InMemoryChannel dispatcher metrics

You can use the following metrics to debug InMemoryChannel channels, see how they are performing, and see which events are being dispatched by the channels.

Metric name | Description | Type | Tags | Unit

event_count

Number of events dispatched by InMemoryChannel channels.

Counter

broker_name, container_name, filter_type, namespace_name, response_code, response_code_class, trigger_name, unique_name

Integer (no units)

event_dispatch_latencies

The time taken to dispatch an event from an InMemoryChannel channel.

Histogram

broker_name, container_name, filter_type, namespace_name, response_code, response_code_class, trigger_name, unique_name

Milliseconds

6.5.4.4. Event source metrics

You can use the following metrics to verify that events have been delivered from the event source to the connected event sink.

Metric name | Description | Type | Tags | Unit

event_count

Number of events sent by the event source.

Counter

broker_name, container_name, filter_type, namespace_name, response_code, response_code_class, trigger_name, unique_name

Integer (no units)

retry_event_count

Number of retried events sent by the event source after initially failing to be delivered.

Counter

event_source, event_type, name, namespace_name, resource_group, response_code, response_code_class, response_error, response_timeout

Integer (no units)

6.5.5. Knative Serving metrics

Cluster administrators can view the following metrics for Knative Serving components.

6.5.5.1. Activator metrics

You can use the following metrics to understand how applications respond when traffic passes through the activator.

Metric name | Description | Type | Tags | Unit

request_concurrency

The number of concurrent requests that are routed to the activator, or average concurrency over a reporting period.

Gauge

configuration_name, container_name, namespace_name, pod_name, revision_name, service_name

Integer (no units)

request_count

The number of requests that are routed to the activator. These are requests that have been fulfilled from the activator handler.

Counter

configuration_name, container_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name

Integer (no units)

request_latencies

The response time in milliseconds for a fulfilled, routed request.

Histogram

configuration_name, container_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name

Milliseconds

6.5.5.2. Autoscaler metrics

The autoscaler component exposes a number of metrics related to autoscaler behavior for each revision. For example, at any given time, you can monitor the targeted number of pods the autoscaler tries to allocate for a service, the average number of requests per second during the stable window, or whether the autoscaler is in panic mode if you are using the Knative pod autoscaler (KPA).

Metric name | Description | Type | Tags | Unit

desired_pods

The number of pods the autoscaler tries to allocate for a service.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

excess_burst_capacity

The excess burst capacity served over the stable window.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

stable_request_concurrency

The average number of requests for each observed pod over the stable window.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

panic_request_concurrency

The average number of requests for each observed pod over the panic window.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

target_concurrency_per_pod

The number of concurrent requests that the autoscaler tries to send to each pod.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

stable_requests_per_second

The average number of requests-per-second for each observed pod over the stable window.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

panic_requests_per_second

The average number of requests-per-second for each observed pod over the panic window.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

target_requests_per_second

The number of requests-per-second that the autoscaler targets for each pod.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

panic_mode

This value is 1 if the autoscaler is in panic mode, or 0 if the autoscaler is not in panic mode.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

requested_pods

The number of pods that the autoscaler has requested from the Kubernetes cluster.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

actual_pods

The number of pods that are allocated and currently have a ready state.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

not_ready_pods

The number of pods that are in a not-ready state.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

pending_pods

The number of pods that are currently pending.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

terminating_pods

The number of pods that are currently terminating.

Gauge

configuration_name, namespace_name, revision_name, service_name

Integer (no units)

6.5.5.3. Go runtime metrics

Each Knative Serving control plane process emits a number of Go runtime memory statistics (MemStats).

Note

The name tag for each metric is an empty tag.

Metric name | Description | Type | Tags | Unit

go_alloc

The number of bytes of allocated heap objects. This metric is the same as heap_alloc.

Gauge

name

Integer (no units)

go_total_alloc

The cumulative bytes allocated for heap objects.

Gauge

name

Integer (no units)

go_sys

The total bytes of memory obtained from the operating system.

Gauge

name

Integer (no units)

go_lookups

The number of pointer lookups performed by the runtime.

Gauge

name

Integer (no units)

go_mallocs

The cumulative count of heap objects allocated.

Gauge

name

Integer (no units)

go_frees

The cumulative count of heap objects that have been freed.

Gauge

name

Integer (no units)

go_heap_alloc

The number of bytes of allocated heap objects.

Gauge

name

Integer (no units)

go_heap_sys

The number of bytes of heap memory obtained from the operating system.

Gauge

name

Integer (no units)

go_heap_idle

The number of bytes in idle, unused spans.

Gauge

name

Integer (no units)

go_heap_in_use

The number of bytes in spans that are currently in use.

Gauge

name

Integer (no units)

go_heap_released

The number of bytes of physical memory returned to the operating system.

Gauge

name

Integer (no units)

go_heap_objects

The number of allocated heap objects.

Gauge

name

Integer (no units)

go_stack_in_use

The number of bytes in stack spans that are currently in use.

Gauge

name

Integer (no units)

go_stack_sys

The number of bytes of stack memory obtained from the operating system.

Gauge

name

Integer (no units)

go_mspan_in_use

The number of bytes of allocated mspan structures.

Gauge

name

Integer (no units)

go_mspan_sys

The number of bytes of memory obtained from the operating system for mspan structures.

Gauge

name

Integer (no units)

go_mcache_in_use

The number of bytes of allocated mcache structures.

Gauge

name

Integer (no units)

go_mcache_sys

The number of bytes of memory obtained from the operating system for mcache structures.

Gauge

name

Integer (no units)

go_bucket_hash_sys

The number of bytes of memory in profiling bucket hash tables.

Gauge

name

Integer (no units)

go_gc_sys

The number of bytes of memory in garbage collection metadata.

Gauge

name

Integer (no units)

go_other_sys

The number of bytes of memory in miscellaneous, off-heap runtime allocations.

Gauge

name

Integer (no units)

go_next_gc

The target heap size of the next garbage collection cycle.

Gauge

name

Integer (no units)

go_last_gc

The time that the last garbage collection was completed in Epoch or Unix time.

Gauge

name

Nanoseconds

go_total_gc_pause_ns

The cumulative time in garbage collection stop-the-world pauses since the program started.

Gauge

name

Nanoseconds

go_num_gc

The number of completed garbage collection cycles.

Gauge

name

Integer (no units)

go_num_forced_gc

The number of garbage collection cycles that were forced due to an application calling the garbage collection function.

Gauge

name

Integer (no units)

go_gc_cpu_fraction

The fraction of the available CPU time of the program that has been used by the garbage collector since the program started.

Gauge

name

Integer (no units)

6.6. Using metering with OpenShift Serverless

Important

Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

As a cluster administrator, you can use metering to analyze what is happening in your OpenShift Serverless cluster.

For more information about metering on OpenShift Container Platform, see About metering.

Note

Metering is not currently supported for IBM Z and IBM Power Systems.

6.6.1. Installing metering

For information about installing metering on OpenShift Container Platform, see Installing Metering.

6.6.2. Data source reports for Knative Serving metering

The following data source reports are examples of how Knative Serving can be used with OpenShift Container Platform metering.

6.6.2.1. Data source report for CPU usage in Knative Serving

This data source report provides the accumulated CPU seconds used per Knative service over the report time period.

Example YAML file

apiVersion: metering.openshift.io/v1
kind: ReportDataSource
metadata:
  name: knative-service-cpu-usage
spec:
  prometheusMetricsImporter:
    query: >
      sum
          by(namespace,
             label_serving_knative_dev_service,
             label_serving_knative_dev_revision)
          (
            label_replace(rate(container_cpu_usage_seconds_total{container!="POD",container!="",pod!=""}[1m]), "pod", "$1", "pod", "(.*)")
            *
            on(pod, namespace)
            group_left(label_serving_knative_dev_service, label_serving_knative_dev_revision)
            kube_pod_labels{label_serving_knative_dev_service!=""}
          )

6.6.2.2. Data source report for memory usage in Knative Serving

This data source report provides the average memory consumption per Knative service over the report time period.

Example YAML file

apiVersion: metering.openshift.io/v1
kind: ReportDataSource
metadata:
  name: knative-service-memory-usage
spec:
  prometheusMetricsImporter:
    query: >
      sum
          by(namespace,
             label_serving_knative_dev_service,
             label_serving_knative_dev_revision)
          (
            label_replace(container_memory_usage_bytes{container!="POD", container!="",pod!=""}, "pod", "$1", "pod", "(.*)")
            *
            on(pod, namespace)
            group_left(label_serving_knative_dev_service, label_serving_knative_dev_revision)
            kube_pod_labels{label_serving_knative_dev_service!=""}
          )

6.6.2.3. Applying data source reports for Knative Serving metering

You can apply data source reports by using the following command:

$ oc apply -f <data_source_report_name>.yaml

Example command

$ oc apply -f knative-service-memory-usage.yaml
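
You can optionally verify that the data source was created. The following command is an example that assumes metering is installed in the default openshift-metering namespace:

$ oc get reportdatasources -n openshift-metering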

6.6.3. Queries for Knative Serving metering

The following ReportQuery resources reference the example ReportDataSource resources provided:

Query for CPU usage in Knative Serving

apiVersion: metering.openshift.io/v1
kind: ReportQuery
metadata:
  name: knative-service-cpu-usage
spec:
  inputs:
  - name: ReportingStart
    type: time
  - name: ReportingEnd
    type: time
  - default: knative-service-cpu-usage
    name: KnativeServiceCpuUsageDataSource
    type: ReportDataSource
  columns:
  - name: period_start
    type: timestamp
    unit: date
  - name: period_end
    type: timestamp
    unit: date
  - name: namespace
    type: varchar
    unit: kubernetes_namespace
  - name: service
    type: varchar
  - name: data_start
    type: timestamp
    unit: date
  - name: data_end
    type: timestamp
    unit: date
  - name: service_cpu_seconds
    type: double
    unit: cpu_core_seconds
  query: |
    SELECT
      timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart| prestoTimestamp |}' AS period_start,
      timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' AS period_end,
      labels['namespace'] as project,
      labels['label_serving_knative_dev_service'] as service,
      min("timestamp") as data_start,
      max("timestamp") as data_end,
      sum(amount * "timeprecision") AS service_cpu_seconds
    FROM {| dataSourceTableName .Report.Inputs.KnativeServiceCpuUsageDataSource |}
    WHERE "timestamp" >= timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}'
    AND "timestamp" < timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}'
    GROUP BY labels['namespace'],labels['label_serving_knative_dev_service']

Query for memory usage in Knative Serving

apiVersion: metering.openshift.io/v1
kind: ReportQuery
metadata:
  name: knative-service-memory-usage
spec:
  inputs:
  - name: ReportingStart
    type: time
  - name: ReportingEnd
    type: time
  - default: knative-service-memory-usage
    name: KnativeServiceMemoryUsageDataSource
    type: ReportDataSource
  columns:
  - name: period_start
    type: timestamp
    unit: date
  - name: period_end
    type: timestamp
    unit: date
  - name: namespace
    type: varchar
    unit: kubernetes_namespace
  - name: service
    type: varchar
  - name: data_start
    type: timestamp
    unit: date
  - name: data_end
    type: timestamp
    unit: date
  - name: service_usage_memory_byte_seconds
    type: double
    unit: byte_seconds
  query: |
    SELECT
      timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart| prestoTimestamp |}' AS period_start,
      timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' AS period_end,
      labels['namespace'] as project,
      labels['label_serving_knative_dev_service'] as service,
      min("timestamp") as data_start,
      max("timestamp") as data_end,
      sum(amount * "timeprecision") AS service_usage_memory_byte_seconds
    FROM {| dataSourceTableName .Report.Inputs.KnativeServiceMemoryUsageDataSource |}
    WHERE "timestamp" >= timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}'
    AND "timestamp" < timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}'
    GROUP BY labels['namespace'],labels['label_serving_knative_dev_service']

6.6.3.1. Applying queries for Knative Serving metering

  1. Apply the ReportQuery resource:

    $ oc apply -f <query_name>.yaml

    Example command

    $ oc apply -f knative-service-memory-usage.yaml
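
  2. Optional: Verify that the query was created. The following example assumes that metering is installed in the default openshift-metering namespace:

    $ oc get reportqueries -n openshift-metering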

6.6.4. Metering reports for Knative Serving

You can run metering reports against Knative Serving by creating Report resources. Before you run a report, you must modify the input parameters within the Report resource to specify the start and end dates of the reporting period.

Example Report resource

apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: knative-service-cpu-usage
spec:
  reportingStart: '2019-06-01T00:00:00Z' 1
  reportingEnd: '2019-06-30T23:59:59Z' 2
  query: knative-service-cpu-usage 3
  runImmediately: true

1
Start date of the report, in ISO 8601 format.
2
End date of the report, in ISO 8601 format.
3
Either knative-service-cpu-usage for a CPU usage report, or knative-service-memory-usage for a memory usage report.

6.6.4.1. Running a metering report

  1. Run the report:

    $ oc apply -f <report_name>.yaml
  2. You can then check the report:

    $ oc get report

    Example output

    NAME                        QUERY                       SCHEDULE   RUNNING    FAILED   LAST REPORT TIME       AGE
    knative-service-cpu-usage   knative-service-cpu-usage              Finished            2019-06-30T23:59:59Z   10h
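
If a report does not finish, you can inspect its status conditions. The following command is an example; run it in the namespace where metering is installed, which is openshift-metering by default:

$ oc describe report knative-service-cpu-usage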

6.7. High availability

High availability (HA) is a standard feature of Kubernetes APIs that helps to ensure that APIs stay operational if a disruption occurs. In an HA deployment, if an active controller crashes or is deleted, another controller is readily available. This controller takes over processing of the APIs that were being serviced by the controller that is now unavailable.

HA in OpenShift Serverless is available through leader election, which is enabled by default after the Knative Serving or Eventing control plane is installed. When using a leader election HA pattern, instances of controllers are already scheduled and running inside the cluster before they are required. These controller instances compete to use a shared resource, known as the leader election lock. The instance of the controller that has access to the leader election lock resource at any given time is called the leader.
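
For example, you can see which controller instances currently hold the leader election locks by listing the leases in the Knative system namespaces. This is only a sketch; the set of leases and their names depend on the installed components and release:

$ oc get leases -n knative-serving

$ oc get leases -n knative-eventing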

6.7.1. Configuring high availability replicas for Knative Serving

High availability (HA) is available by default for the Knative Serving activator, autoscaler, autoscaler-hpa, controller, webhook, kourier-control, and kourier-gateway components, which are configured with two replicas each. You can change the number of replicas for these components by modifying the spec.high-availability.replicas value in the KnativeServing custom resource (CR).

Prerequisites

  • You have access to an OpenShift Container Platform cluster with cluster administrator permissions.
  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have logged into the web console.

Procedure

  1. In the OpenShift Container Platform web console Administrator perspective, navigate to OperatorHub → Installed Operators.
  2. Select the knative-serving namespace.
  3. Click Knative Serving in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Serving tab.
  4. Click knative-serving, then go to the YAML tab in the knative-serving page.

  5. Modify the number of replicas in the KnativeServing CR:

    Example YAML

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      high-availability:
        replicas: 3
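
After the OpenShift Serverless Operator reconciles the change, you can check that the listed deployments have scaled to the requested number of replicas. This command is an example; the exact set of deployments can vary by release:

$ oc get deployments -n knative-serving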

6.7.2. Configuring high availability replicas for Knative Eventing

High availability (HA) is available by default for the Knative Eventing eventing-controller, eventing-webhook, imc-controller, imc-dispatcher, mt-broker-controller, and sugar-controller components, which are configured with two replicas each. You can change the number of replicas for these components by modifying the spec.high-availability.replicas value in the KnativeEventing custom resource (CR).

Note

For Knative Eventing, the mt-broker-filter and mt-broker-ingress deployments are not scaled by HA. If multiple deployments are needed, scale these components manually.
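
For example, the following commands are one way to scale these deployments manually; adjust the replica count to your needs:

$ oc -n knative-eventing scale deployment mt-broker-filter --replicas=2

$ oc -n knative-eventing scale deployment mt-broker-ingress --replicas=2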

Prerequisites

  • You have access to an OpenShift Container Platform cluster with cluster administrator permissions.
  • The OpenShift Serverless Operator and Knative Eventing are installed on your cluster.

Procedure

  1. In the OpenShift Container Platform web console Administrator perspective, navigate to OperatorHub → Installed Operators.
  2. Select the knative-eventing namespace.
  3. Click Knative Eventing in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Eventing tab.
  4. Click knative-eventing, then go to the YAML tab in the knative-eventing page.

  5. Modify the number of replicas in the KnativeEventing CR:

    Example YAML

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
    spec:
      high-availability:
        replicas: 3
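
You can verify the new replica counts in the same way as for Knative Serving, for example:

$ oc get deployments -n knative-eventing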

6.7.3. Configuring high availability replicas for Knative Kafka

High availability (HA) is available by default for the Knative Kafka kafka-controller and kafka-webhook-eventing components, which are configured with two replicas each. You can change the number of replicas for these components by modifying the spec.high-availability.replicas value in the KnativeKafka custom resource (CR).

Prerequisites

  • You have access to an OpenShift Container Platform cluster with cluster administrator permissions.
  • The OpenShift Serverless Operator and Knative Kafka are installed on your cluster.

Procedure

  1. In the OpenShift Container Platform web console Administrator perspective, navigate to OperatorHub → Installed Operators.
  2. Select the knative-eventing namespace.
  3. Click Knative Kafka in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Kafka tab.
  4. Click knative-kafka, then go to the YAML tab in the knative-kafka page.

  5. Modify the number of replicas in the KnativeKafka CR:

    Example YAML

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      name: knative-kafka
      namespace: knative-eventing
    spec:
      high-availability:
        replicas: 3
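
After the OpenShift Serverless Operator reconciles the change, you can verify the replica counts of the Knative Kafka components. This command is an example that uses the deployment names listed above:

$ oc get deployments kafka-controller kafka-webhook-eventing -n knative-eventing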