Chapter 10. Security

10.1. Configuring TLS authentication

You can use Transport Layer Security (TLS) to encrypt Knative traffic and for authentication.

TLS is the only supported method of traffic encryption for Knative Kafka. Red Hat recommends using both SASL and TLS together for Knative Kafka resources.

Note

If you want to enable internal TLS with a Red Hat OpenShift Service Mesh integration, you must enable Service Mesh with mTLS instead of the internal encryption explained in the following procedure. See the documentation for Enabling Knative Serving metrics when using Service Mesh with mTLS.

10.1.1. Enabling TLS authentication for internal traffic

OpenShift Serverless supports TLS edge termination by default, so that HTTPS traffic from end users is encrypted. However, internal traffic behind the OpenShift route is forwarded to applications as plain text. By enabling TLS for internal traffic, the traffic that is sent between components is encrypted, which makes this traffic more secure.

Note

If you want to enable internal TLS with a Red Hat OpenShift Service Mesh integration, you must enable Service Mesh with mTLS instead of the internal encryption explained in the following procedure.

Important

Internal TLS encryption support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Prerequisites

  • You have installed the OpenShift Serverless Operator and Knative Serving.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Update your KnativeServing custom resource (CR) to include the internal-encryption: "true" field in the spec:

    ...
    spec:
      config:
        network:
          internal-encryption: "true"
    ...
  2. Restart the activator pods in the knative-serving namespace to load the certificates:

    $ oc delete pod -n knative-serving --selector app=activator

10.1.2. Enabling TLS authentication for cluster local services

For cluster local services, the Kourier local gateway kourier-internal is used. If you want to use TLS traffic against the Kourier local gateway, you must configure your own server certificates in the local gateway.

Prerequisites

  • You have installed the OpenShift Serverless Operator and Knative Serving.
  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Deploy server certificates in the knative-serving-ingress namespace:

    $ export san="knative"
    Note

    Subject Alternative Name (SAN) validation is required so that these certificates can serve the request to <app_name>.<namespace>.svc.cluster.local.

  2. Generate a root key and certificate:

    $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
        -subj '/O=Example/CN=Example' \
        -keyout ca.key \
        -out ca.crt
  3. Generate a server key that uses SAN validation:

    $ openssl req -out tls.csr -newkey rsa:2048 -nodes -keyout tls.key \
      -subj "/CN=Example/O=Example" \
      -addext "subjectAltName = DNS:$san"
  4. Create server certificates:

    $ openssl x509 -req -extfile <(printf "subjectAltName=DNS:$san") \
      -days 365 -in tls.csr \
      -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt
  5. Configure a secret for the Kourier local gateway:

    1. Deploy a secret in knative-serving-ingress namespace from the certificates created by the previous steps:

      $ oc create -n knative-serving-ingress secret tls server-certs \
          --key=tls.key \
          --cert=tls.crt --dry-run=client -o yaml | oc apply -f -
    2. Update the KnativeServing custom resource (CR) spec so that the Kourier gateway uses the secret that was created in the previous step:

      Example KnativeServing CR

      ...
      spec:
        config:
          kourier:
            cluster-cert-secret: server-certs
      ...

The Kourier controller sets the certificate without restarting the service, so that you do not need to restart the pod.

You can access the Kourier internal service with TLS through port 443 by mounting the ca.crt file on the client and using it to verify the connection.
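
For example, a client pod that mounts the ca.crt file can verify the encrypted connection with a command similar to the following sketch. The /etc/ca mount path and the use of the curl --connect-to option are assumptions for illustration; the request succeeds only if the requested <app_name>.<namespace>.svc.cluster.local hostname is covered by the SAN of your server certificate:

Example command

    $ curl --cacert /etc/ca/ca.crt \
        --connect-to "<app_name>.<namespace>.svc.cluster.local:443:kourier-internal.knative-serving-ingress.svc:443" \
        https://<app_name>.<namespace>.svc.cluster.local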

10.1.3. Securing a service with a custom domain by using a TLS certificate

After you have configured a custom domain for a Knative service, you can use a TLS certificate to secure the mapped service. To do this, you must create a Kubernetes TLS secret, and then update the DomainMapping CR to use the TLS secret that you have created.

Prerequisites

  • You configured a custom domain for a Knative service and have a working DomainMapping CR.
  • You have obtained a TLS certificate and key, either from your Certificate Authority provider or as a self-signed certificate.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create a Kubernetes TLS secret:

    $ oc create secret tls <tls_secret_name> --cert=<path_to_certificate_file> --key=<path_to_key_file>
  2. If you are using Red Hat OpenShift Service Mesh as the ingress for your OpenShift Serverless installation, label the Kubernetes TLS secret with the following label:

    networking.internal.knative.dev/certificate-uid: "<value>"

    If you are using a third-party secret provider such as cert-manager, you can configure your secret manager to label the Kubernetes TLS secret automatically. Cert-manager users can use the secret template that it offers to generate secrets with the correct label automatically; see the example after the following note. In this case, secret filtering is based on the key only, but the value can carry useful information such as the ID of the certificate that the secret contains.

    Note

    The {cert-manager-operator} is a Technology Preview feature. For more information, see the Installing the {cert-manager-operator} documentation.
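
    The following is a minimal sketch of a cert-manager Certificate object that uses the secretTemplate field to add the required label to the generated secret. The example-com-cert and <issuer_name> names are placeholders for illustration, not values required by OpenShift Serverless:

    Example cert-manager Certificate object

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: example-com-cert
      namespace: <namespace>
    spec:
      secretName: <tls_secret_name>
      secretTemplate:
        labels:
          networking.internal.knative.dev/certificate-uid: "<value>"
      dnsNames:
      - <domain_name>
      issuerRef:
        name: <issuer_name>
        kind: Issuer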

  3. Update the DomainMapping CR to use the TLS secret that you have created:

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
      name: <domain_name>
      namespace: <namespace>
    spec:
      ref:
        name: <service_name>
        kind: Service
        apiVersion: serving.knative.dev/v1
    # TLS block specifies the secret to be used
      tls:
        secretName: <tls_secret_name>

Verification

  1. Verify that the READY column of the DomainMapping CR shows True, and that the URL column shows the mapped domain with the scheme https:

    $ oc get domainmapping <domain_name>

    Example output

    NAME                      URL                               READY   REASON
    example.com               https://example.com               True

  2. Optional: If the service is exposed publicly, verify that it is available by running the following command:

    $ curl https://<domain_name>

    If the certificate is self-signed, skip verification by adding the -k flag to the curl command.
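
    For example:

    $ curl -k https://<domain_name>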

10.1.4. Configuring TLS authentication for Kafka brokers

Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for Knative Kafka.

Prerequisites

  • You have cluster administrator permissions on OpenShift Container Platform.
  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have a Kafka cluster CA certificate stored as a .pem file.
  • You have a Kafka cluster client certificate and a key stored as .pem files.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create the certificate files as a secret in the knative-eventing namespace:

    $ oc create secret -n knative-eventing generic <secret_name> \
      --from-literal=protocol=SSL \
      --from-file=ca.crt=caroot.pem \
      --from-file=user.crt=certificate.pem \
      --from-file=user.key=key.pem
    Important

    Use the key names ca.crt, user.crt, and user.key. Do not change them.

  2. Edit the KnativeKafka CR and add a reference to your secret in the broker spec:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      broker:
        enabled: true
        defaultConfig:
          authSecretName: <secret_name>
    ...

10.1.5. Configuring TLS authentication for Kafka channels

Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for Knative Kafka.

Prerequisites

  • You have cluster administrator permissions on OpenShift Container Platform.
  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have a Kafka cluster CA certificate stored as a .pem file.
  • You have a Kafka cluster client certificate and a key stored as .pem files.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create the certificate files as secrets in your chosen namespace:

    $ oc create secret -n <namespace> generic <kafka_auth_secret> \
      --from-file=ca.crt=caroot.pem \
      --from-file=user.crt=certificate.pem \
      --from-file=user.key=key.pem
    Important

    Use the key names ca.crt, user.crt, and user.key. Do not change them.

  2. Start editing the KnativeKafka custom resource:

    $ oc edit knativekafka
  3. Reference your secret and the namespace of the secret:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      channel:
        authSecretName: <kafka_auth_secret>
        authSecretNamespace: <kafka_auth_secret_namespace>
        bootstrapServers: <bootstrap_servers>
        enabled: true
      source:
        enabled: true
    Note

    Make sure to specify the matching port in the bootstrap server.

    For example:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      channel:
        authSecretName: tls-user
        authSecretNamespace: kafka
        bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9094
        enabled: true
      source:
        enabled: true

10.2. Configuring JSON Web Token authentication for Knative services

OpenShift Serverless does not currently have user-defined authorization features. To add user-defined authorization to your deployment, you must integrate OpenShift Serverless with Red Hat OpenShift Service Mesh, and then configure JSON Web Token (JWT) authentication and sidecar injection for Knative services.

10.2.1. Using JSON Web Token authentication with Service Mesh 2.x and OpenShift Serverless

You can use JSON Web Token (JWT) authentication with Knative services by using Service Mesh 2.x and OpenShift Serverless. To do this, you must create authentication requests and policies in the application namespace that is a member of the ServiceMeshMemberRoll object. You must also enable sidecar injection for the service.

Important

Adding sidecar injection to pods in system namespaces, such as knative-serving and knative-serving-ingress, is not supported when Kourier is enabled.

If you require sidecar injection for pods in these namespaces, see the OpenShift Serverless documentation on Integrating Service Mesh with OpenShift Serverless natively.

Prerequisites

  • You have installed the OpenShift Serverless Operator, Knative Serving, and Red Hat OpenShift Service Mesh on your cluster.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. Add the sidecar.istio.io/inject="true" annotation to your service:

    Example service

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: <service_name>
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true" 1
            sidecar.istio.io/rewriteAppHTTPProbers: "true" 2
    ...

    1
    Add the sidecar.istio.io/inject="true" annotation.
    2
    You must set the annotation sidecar.istio.io/rewriteAppHTTPProbers: "true" in your Knative service, because OpenShift Serverless versions 1.14.0 and higher use an HTTP probe as the readiness probe for Knative services by default.
  2. Apply the Service resource:

    $ oc apply -f <filename>
  3. Create a RequestAuthentication resource in each serverless application namespace that is a member of the ServiceMeshMemberRoll object:

    apiVersion: security.istio.io/v1beta1
    kind: RequestAuthentication
    metadata:
      name: jwt-example
      namespace: <namespace>
    spec:
      jwtRules:
      - issuer: testing@secure.istio.io
        jwksUri: https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json
  4. Apply the RequestAuthentication resource:

    $ oc apply -f <filename>
  5. Allow access to the RequestAuthentication resource from system pods for each serverless application namespace that is a member of the ServiceMeshMemberRoll object, by creating the following AuthorizationPolicy resource:

    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allowlist-by-paths
      namespace: <namespace>
    spec:
      action: ALLOW
      rules:
      - to:
        - operation:
            paths:
            - /metrics 1
            - /healthz 2
    1
    The path on your application that is used by system pods to collect metrics.
    2
    The path on your application that is used by system pods for health probes.
  6. Apply the AuthorizationPolicy resource:

    $ oc apply -f <filename>
  7. For each serverless application namespace that is a member of the ServiceMeshMemberRoll object, create the following AuthorizationPolicy resource:

    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: require-jwt
      namespace: <namespace>
    spec:
      action: ALLOW
      rules:
      - from:
        - source:
           requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
  8. Apply the AuthorizationPolicy resource:

    $ oc apply -f <filename>

Verification

  1. Try to access the Knative service URL by using a curl request without a JWT. The request is denied:

    Example command

    $ curl http://hello-example-1-default.apps.mycluster.example.com/

    Example output

    RBAC: access denied

  2. Verify the request with a valid JWT.

    1. Get the valid JWT token:

      $ TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/demo.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode -
    2. Access the service by using the valid token in the curl request header:

      $ curl -H "Authorization: Bearer $TOKEN"  http://hello-example-1-default.apps.example.com

      The request is now allowed:

      Example output

      Hello OpenShift!

10.2.2. Using JSON Web Token authentication with Service Mesh 1.x and OpenShift Serverless

You can use JSON Web Token (JWT) authentication with Knative services by using Service Mesh 1.x and OpenShift Serverless. To do this, you must create a policy in the application namespace that is a member of the ServiceMeshMemberRoll object. You must also enable sidecar injection for the service.

Important

Adding sidecar injection to pods in system namespaces, such as knative-serving and knative-serving-ingress, is not supported when Kourier is enabled.

If you require sidecar injection for pods in these namespaces, see the OpenShift Serverless documentation on Integrating Service Mesh with OpenShift Serverless natively.

Prerequisites

  • You have installed the OpenShift Serverless Operator, Knative Serving, and Red Hat OpenShift Service Mesh on your cluster.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. Add the sidecar.istio.io/inject="true" annotation to your service:

    Example service

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: <service_name>
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true" 1
            sidecar.istio.io/rewriteAppHTTPProbers: "true" 2
    ...

    1
    Add the sidecar.istio.io/inject="true" annotation.
    2
    You must set the annotation sidecar.istio.io/rewriteAppHTTPProbers: "true" in your Knative service, because OpenShift Serverless versions 1.14.0 and higher use an HTTP probe as the readiness probe for Knative services by default.
  2. Apply the Service resource:

    $ oc apply -f <filename>
  3. Create a policy in a serverless application namespace that is a member of the ServiceMeshMemberRoll object, which allows only requests with valid JSON Web Tokens (JWT):

    Important

    The paths /metrics and /healthz must be included in excludedPaths because they are accessed from system pods in the knative-serving namespace.

    apiVersion: authentication.istio.io/v1alpha1
    kind: Policy
    metadata:
      name: default
      namespace: <namespace>
    spec:
      origins:
      - jwt:
          issuer: testing@secure.istio.io
          jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/jwks.json"
          triggerRules:
          - excludedPaths:
            - prefix: /metrics 1
            - prefix: /healthz 2
      principalBinding: USE_ORIGIN
    1
    The path on your application that is used by system pods to collect metrics.
    2
    The path on your application that is used by system pods for health probes.
  4. Apply the Policy resource:

    $ oc apply -f <filename>

Verification

  1. Try to access the Knative service URL by using a curl request without a JWT. The request is denied:

    $ curl http://hello-example-default.apps.mycluster.example.com/

    Example output

    Origin authentication failed.

  2. Verify the request with a valid JWT.

    1. Get the valid JWT token:

      $ TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/demo.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode -
    2. Access the service by using the valid token in the curl request header:

      $ curl http://hello-example-default.apps.mycluster.example.com/ -H "Authorization: Bearer $TOKEN"

      The request is now allowed:

      Example output

      Hello OpenShift!

10.3. Configuring a custom domain for a Knative service

Knative services are automatically assigned a default domain name based on your cluster configuration. For example, <service_name>-<namespace>.example.com. You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service.

You can do this by creating a DomainMapping resource for the service. You can also create multiple DomainMapping resources to map multiple domains and subdomains to a single service.
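
For example, the following sketch shows two DomainMapping CRs that map both example.com and www.example.com to the same example-service; the domain names and the service name are placeholders:

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
      name: example.com
      namespace: default
    spec:
      ref:
        name: example-service
        kind: Service
        apiVersion: serving.knative.dev/v1
    ---
    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
      name: www.example.com
      namespace: default
    spec:
      ref:
        name: example-service
        kind: Service
        apiVersion: serving.knative.dev/v1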

10.3.1. Creating a custom domain mapping

You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. To map a custom domain name to a custom resource (CR), you must create a DomainMapping CR that maps to an Addressable target CR, such as a Knative service or a Knative route.

Prerequisites

  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have created a Knative service and control a custom domain that you want to map to that service.

    Note

    Your custom domain must point to the IP address of the OpenShift Container Platform cluster.

Procedure

  1. Create a YAML file containing the DomainMapping CR in the same namespace as the target CR you want to map to:

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
      name: <domain_name> 1
      namespace: <namespace> 2
    spec:
      ref:
        name: <target_name> 3
        kind: <target_type> 4
        apiVersion: serving.knative.dev/v1
    1
    The custom domain name that you want to map to the target CR.
    2
    The namespace of both the DomainMapping CR and the target CR.
    3
    The name of the target CR to map to the custom domain.
    4
    The type of CR being mapped to the custom domain.

    Example service domain mapping

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
      name: example.com
      namespace: default
    spec:
      ref:
        name: example-service
        kind: Service
        apiVersion: serving.knative.dev/v1

    Example route domain mapping

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
      name: example.com
      namespace: default
    spec:
      ref:
        name: example-route
        kind: Route
        apiVersion: serving.knative.dev/v1

  2. Apply the DomainMapping CR as a YAML file:

    $ oc apply -f <filename>

10.3.2. Creating a custom domain mapping by using the Knative CLI

You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can use the Knative (kn) CLI to create a DomainMapping custom resource (CR) that maps to an Addressable target CR, such as a Knative service or a Knative route.

Prerequisites

  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have created a Knative service or route, and control a custom domain that you want to map to that CR.

    Note

    Your custom domain must point to the DNS of the OpenShift Container Platform cluster.

  • You have installed the Knative (kn) CLI.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  • Map a domain to a CR in the current namespace:

    $ kn domain create <domain_mapping_name> --ref <target_name>

    Example command

    $ kn domain create example.com --ref example-service

    The --ref flag specifies an Addressable target CR for domain mapping.

    If a prefix is not provided when using the --ref flag, it is assumed that the target is a Knative service in the current namespace.

  • Map a domain to a Knative service in a specified namespace:

    $ kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>

    Example command

    $ kn domain create example.com --ref ksvc:example-service:example-namespace

  • Map a domain to a Knative route:

    $ kn domain create <domain_mapping_name> --ref <kroute:route_name>

    Example command

    $ kn domain create example.com --ref kroute:example-route

10.3.3. Securing a service with a custom domain by using a TLS certificate

After you have configured a custom domain for a Knative service, you can use a TLS certificate to secure the mapped service. To do this, you must create a Kubernetes TLS secret, and then update the DomainMapping CR to use the TLS secret that you have created.

Prerequisites

  • You configured a custom domain for a Knative service and have a working DomainMapping CR.
  • You have obtained a TLS certificate and key, either from your Certificate Authority provider or as a self-signed certificate.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create a Kubernetes TLS secret:

    $ oc create secret tls <tls_secret_name> --cert=<path_to_certificate_file> --key=<path_to_key_file>
  2. If you are using Red Hat OpenShift Service Mesh as the ingress for your OpenShift Serverless installation, label the Kubernetes TLS secret with the following label:

    networking.internal.knative.dev/certificate-uid: "<value>"

    If you are using a third-party secret provider such as cert-manager, you can configure your secret manager to label the Kubernetes TLS secret automatically. Cert-manager users can use the secret template that it offers to generate secrets with the correct label automatically; see the example after the following note. In this case, secret filtering is based on the key only, but the value can carry useful information such as the ID of the certificate that the secret contains.

    Note

    The {cert-manager-operator} is a Technology Preview feature. For more information, see the Installing the {cert-manager-operator} documentation.
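
    The following is a minimal sketch of a cert-manager Certificate object that uses the secretTemplate field to add the required label to the generated secret. The example-com-cert and <issuer_name> names are placeholders for illustration, not values required by OpenShift Serverless:

    Example cert-manager Certificate object

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: example-com-cert
      namespace: <namespace>
    spec:
      secretName: <tls_secret_name>
      secretTemplate:
        labels:
          networking.internal.knative.dev/certificate-uid: "<value>"
      dnsNames:
      - <domain_name>
      issuerRef:
        name: <issuer_name>
        kind: Issuer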

  3. Update the DomainMapping CR to use the TLS secret that you have created:

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
      name: <domain_name>
      namespace: <namespace>
    spec:
      ref:
        name: <service_name>
        kind: Service
        apiVersion: serving.knative.dev/v1
    # TLS block specifies the secret to be used
      tls:
        secretName: <tls_secret_name>

Verification

  1. Verify that the READY column of the DomainMapping CR shows True, and that the URL column shows the mapped domain with the scheme https:

    $ oc get domainmapping <domain_name>

    Example output

    NAME                      URL                               READY   REASON
    example.com               https://example.com               True

  2. Optional: If the service is exposed publicly, verify that it is available by running the following command:

    $ curl https://<domain_name>

    If the certificate is self-signed, skip verification by adding the -k flag to the curl command.
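
    For example:

    $ curl -k https://<domain_name>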