Chapter 9. Knative Eventing

9.1. Understanding Knative Eventing

Knative Eventing on OpenShift Container Platform enables developers to use an event-driven architecture with serverless applications.

9.1.1. Event-driven architecture

An event-driven architecture is based on the concept of decoupled relationships between event producers that create events, and event sinks, or consumers, that receive events.

Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specifications, which enables creating, parsing, sending, and receiving events in any programming language.
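
For example, an event producer can deliver a CloudEvent to a broker by setting the required CloudEvents attributes as Ce- HTTP headers on a POST request. The following command is a sketch only; it assumes a broker named default in <namespace>, and the attribute values are placeholders:

$ curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/<namespace>/default" \
  -X POST \
  -H "Ce-Id: say-hello-001" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: greeting" \
  -H "Ce-Source: curl-command" \
  -H "Content-Type: application/json" \
  -d '{"msg": "Hello Knative Eventing"}'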

9.1.2. Knative Eventing use cases

Knative Eventing supports the following use cases:

Publish an event without creating a consumer
You can send events to a broker as an HTTP POST, and use binding to decouple the destination configuration from your application that produces events.
Consume an event without creating a publisher
You can use a trigger to consume events from a broker based on event attributes. The application receives events as an HTTP POST.

9.1.3. Knative Eventing custom resources (CRs)

To enable delivery to multiple types of sinks, Knative Eventing defines the following generic interfaces that can be implemented by multiple Kubernetes resources:

Addressable resources
Able to receive and acknowledge an event delivered over HTTP to an address defined in the resource's status.address.url field. The Kubernetes Service resource also satisfies the addressable interface.
Callable resources
Able to receive an event delivered over HTTP and transform it, returning 0 or 1 new events in the HTTP response payload. These returned events may be further processed in the same way that events from an external event source are processed.
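
For example, you can read the address of an addressable resource, such as a broker, from its status. The following command is a sketch that assumes a broker named default exists in the current namespace:

$ oc get broker default -o jsonpath='{.status.address.url}'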

You can propagate an event from an event source to multiple event sinks by using channels and subscriptions, or by using brokers and triggers.

9.2. Event sinks

A sink is an Addressable custom resource (CR) that can receive incoming events from other resources. Knative services, channels, and brokers are all examples of sinks.

Tip

You can configure which CRs can be used with the --sink flag for kn CLI commands. For more information, see Customizing kn.

9.2.1. Knative CLI --sink flag

When you create an event-producing custom resource by using the Knative (kn) CLI, you can use the --sink flag to specify a sink where events from that resource are sent.

The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local, as the sink:

Example command using the --sink flag

$ kn source binding create bind-heartbeat \
  --namespace sinkbinding-example \
  --subject "Job:batch/v1:app=heartbeat-cron" \
  --sink http://event-display.svc.cluster.local \ 1
  --ce-override "sink=bound"

1
svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel and broker.
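
For example, the following sketch shows a similar binding that uses the default broker as the sink by using the broker prefix; the broker is assumed to already exist:

$ kn source binding create bind-heartbeat \
  --namespace sinkbinding-example \
  --subject "Job:batch/v1:app=heartbeat-cron" \
  --sink broker:default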

9.2.2. Connecting an event source to a sink by using the Developer perspective

You can create multiple event source types in OpenShift Container Platform that can be connected to sinks.

Prerequisites

To connect an event source to a sink using the Developer perspective, ensure that:

  • The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have created a sink.
  • You have logged in to the web console and are in the Developer perspective.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. Create an event source of any type by navigating to +Add → Event Sources and then selecting the event source type that you want to create.
  2. In the Sink section of the Event Sources form view, select Resource. Then use the drop-down list to select your sink.
  3. Click Create.

Verification

You can verify that the event source was created and is connected to the sink by viewing the Topology page.

  1. In the Developer perspective, navigate to Topology.
  2. View the event source and click on the connected sink to see the sink details in the side panel.

9.2.3. Connecting a trigger to a sink

You can connect a trigger to a sink, so that events from a broker are filtered before they are sent to the sink. A sink that is connected to a trigger is configured as a subscriber in the Trigger resource spec.

Example of a trigger connected to a Kafka sink

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: <trigger_name> 1
spec:
...
  subscriber:
    ref:
      apiVersion: eventing.knative.dev/v1alpha1
      kind: KafkaSink
      name: <kafka_sink_name> 2

1
The name of the trigger being connected to the sink.
2
The name of a KafkaSink object.

9.3. Brokers

Brokers can be used in combination with triggers to deliver events from an event source to an event sink.

Broker event delivery overview

Events can be sent from an event source to a broker as an HTTP POST request.

After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink.
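
For example, the following Trigger object is a minimal sketch that subscribes a Knative service to the default broker and filters on the CloudEvent type attribute; the attribute value and the service name are placeholders:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: <trigger_name>
spec:
  broker: default
  filter:
    attributes:
      type: dev.example.events
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: <service_name>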

9.3.1. Creating a broker

OpenShift Serverless provides a default Knative broker that you can create by using the kn CLI. You can also create the default broker by adding the eventing.knative.dev/injection: enabled annotation to a trigger, or by adding the eventing.knative.dev/injection=enabled label to a namespace.

9.3.1.1. Creating a broker by using the Knative CLI

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the kn CLI.

Procedure

  • Create the default broker:

    $ kn broker create default

Verification

  1. Use the kn command to list all existing brokers:

    $ kn broker list

    Example output

    NAME      URL                                                                     AGE   CONDITIONS   READY   REASON
    default   http://broker-ingress.knative-eventing.svc.cluster.local/test/default   45s   5 OK / 5     True

  2. Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists:

    View the broker in the web console Topology view

9.3.1.2. Creating a broker by annotating a trigger

You can create a broker by adding the eventing.knative.dev/injection: enabled annotation to a Trigger object.

Important

If you create a broker by using the eventing.knative.dev/injection: enabled annotation, you cannot delete this broker without cluster administrator permissions. If you delete the broker without having a cluster administrator remove this annotation first, the broker is created again after deletion.
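
For example, a cluster administrator can remove the annotation from the trigger before you delete the broker; the trailing hyphen removes the annotation:

$ oc -n <namespace> annotate trigger <trigger_name> eventing.knative.dev/injection-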

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the oc CLI.

Procedure

  1. Create a Trigger object as a YAML file that has the eventing.knative.dev/injection: enabled annotation:

    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      annotations:
        eventing.knative.dev/injection: enabled
      name: <trigger_name>
    spec:
      broker: default
      subscriber: 1
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: <service_name>
    1
    Specify details about the event sink, or subscriber, that the trigger sends events to.
  2. Apply the Trigger YAML file:

    $ oc apply -f <filename>

Verification

You can verify that the broker has been created successfully by using the oc CLI, or by observing it in the Topology view in the web console.

  1. Enter the following oc command to get the broker:

    $ oc -n <namespace> get broker default

    Example output

    NAME      READY     REASON    URL                                                                     AGE
    default   True                http://broker-ingress.knative-eventing.svc.cluster.local/test/default   3m56s

  2. Navigate to the Topology view in the web console, and observe that the broker exists:

    View the broker in the web console Topology view

9.3.1.3. Creating a broker by labeling a namespace

You can create the default broker automatically by labeling a namespace that you own or have write permissions for.

Note

Brokers created using this method will not be removed if you remove the label. You must manually delete them.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.

Procedure

  • Label a namespace with eventing.knative.dev/injection=enabled:

    $ oc label namespace <namespace> eventing.knative.dev/injection=enabled

Verification

You can verify that the broker has been created successfully by using the oc CLI, or by observing it in the Topology view in the web console.

  1. Use the oc command to get the broker:

    $ oc -n <namespace> get broker <broker_name>

    Example command

    $ oc -n default get broker default

    Example output

    NAME      READY     REASON    URL                                                                     AGE
    default   True                http://broker-ingress.knative-eventing.svc.cluster.local/test/default   3m56s

  2. Navigate to the Topology view in the web console, and observe that the broker exists:

    View the broker in the web console Topology view

9.3.1.4. Deleting a broker that was created by injection

Brokers created by injection, by using a namespace label or trigger annotation, are not deleted if you remove the label or annotation. You must delete these brokers manually.

Procedure

  1. Remove the eventing.knative.dev/injection=enabled label from the namespace:

    $ oc label namespace <namespace> eventing.knative.dev/injection-

    Removing the label prevents Knative from recreating the broker after you delete it.

  2. Delete the broker from the selected namespace:

    $ oc -n <namespace> delete broker <broker_name>

Verification

  • Use the oc command to get the broker:

    $ oc -n <namespace> get broker <broker_name>

    Example command

    $ oc -n default get broker default

    Example output

    No resources found.
    Error from server (NotFound): brokers.eventing.knative.dev "default" not found

9.3.2. Managing brokers

The kn CLI provides commands that can be used to list, describe, update, and delete brokers.

9.3.2.1. Listing existing brokers by using the Knative CLI

Prerequisites

  • The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the kn CLI.

Procedure

  • List all existing brokers:

    $ kn broker list

    Example output

    NAME      URL                                                                     AGE   CONDITIONS   READY   REASON
    default   http://broker-ingress.knative-eventing.svc.cluster.local/test/default   45s   5 OK / 5     True

9.3.2.2. Describing an existing broker by using the Knative CLI

Prerequisites

  • The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the kn CLI.

Procedure

  • Describe an existing broker:

    $ kn broker describe <broker_name>

    Example command using default broker

    $ kn broker describe default

    Example output

    Name:         default
    Namespace:    default
    Annotations:  eventing.knative.dev/broker.class=MTChannelBasedBroker, eventing.knative.dev/creato ...
    Age:          22s
    
    Address:
      URL:    http://broker-ingress.knative-eventing.svc.cluster.local/default/default
    
    Conditions:
      OK TYPE                   AGE REASON
      ++ Ready                  22s
      ++ Addressable            22s
      ++ FilterReady            22s
      ++ IngressReady           22s
      ++ TriggerChannelReady    22s

9.4. Filtering events from a broker by using triggers

Using triggers enables you to filter events from the broker for delivery to event sinks.

9.4.1. Prerequisites

  • You have installed Knative Eventing and the kn CLI.
  • You have access to an available broker.
  • You have access to an available event consumer, such as a Knative service.

9.4.2. Creating a trigger by using the Developer perspective

After you have created a broker, you can create a trigger in the web console Developer perspective.

Prerequisites

  • The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have logged in to the web console.
  • You are in the Developer perspective.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have created a broker and a Knative service or other event sink to connect to the trigger.

Procedure

  1. In the Developer perspective, navigate to the Topology page.
  2. Hover over the broker that you want to create a trigger for, and drag the arrow. The Add Trigger option is displayed.

    Create a trigger for the broker
  3. Click Add Trigger.
  4. Select your sink as a Subscriber from the drop-down list.
  5. Click Add.

Verification

  • After the subscription has been created, it is represented as a line that connects the broker to the service in the Topology view:

    Trigger in the Topology view

9.4.3. Deleting a trigger by using the Developer perspective

You can delete triggers in the web console Developer perspective.

Prerequisites

  • To delete a trigger using the Developer perspective, ensure that you have logged in to the web console.

Procedure

  1. In the Developer perspective, navigate to the Topology page.
  2. Click on the trigger that you want to delete.
  3. In the Actions context menu, select Delete Trigger.

    Delete a trigger

9.4.4. Creating a trigger by using the Knative CLI

You can create a trigger by using the kn trigger create command.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the kn CLI.

Procedure

  • Create a trigger:

    $ kn trigger create <trigger_name> --broker <broker_name> --filter <key=value> --sink <sink_name>

    Alternatively, you can create a trigger and simultaneously create the default broker using broker injection:

    $ kn trigger create <trigger_name> --inject-broker --filter <key=value> --sink <sink_name>

    By default, triggers forward all events that are sent to a broker to the sinks that are subscribed to that broker. Using the --filter attribute for triggers allows you to filter events from a broker, so that subscribers receive only a subset of events based on your defined criteria.

9.4.5. Listing triggers by using the Knative CLI

The kn trigger list command prints a list of available triggers.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the kn CLI.

Procedure

  1. Print a list of available triggers:

    $ kn trigger list

    Example output

    NAME    BROKER    SINK            AGE   CONDITIONS   READY   REASON
    email   default   ksvc:edisplay   4s    5 OK / 5     True
    ping    default   ksvc:edisplay   32s   5 OK / 5     True

  2. Optional: Print a list of triggers in JSON format:

    $ kn trigger list -o json

9.4.6. Describing a trigger by using the Knative CLI

You can use the kn trigger describe command to print information about a trigger.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the kn CLI.

Procedure

  • Enter the command:

    $ kn trigger describe <trigger_name>

    Example output

    Name:         ping
    Namespace:    default
    Labels:       eventing.knative.dev/broker=default
    Annotations:  eventing.knative.dev/creator=kube:admin, eventing.knative.dev/lastModifier=kube:admin
    Age:          2m
    Broker:       default
    Filter:
      type:       dev.knative.event
    
    Sink:
      Name:       edisplay
      Namespace:  default
      Resource:   Service (serving.knative.dev/v1)
    
    Conditions:
      OK TYPE                  AGE REASON
      ++ Ready                  2m
      ++ BrokerReady            2m
      ++ DependencyReady        2m
      ++ Subscribed             2m
      ++ SubscriberResolved     2m

9.4.7. Filtering events with triggers by using the Knative CLI

In the following trigger example, only events with the attribute type: dev.knative.samples.helloworld will reach the event sink.

$ kn trigger create <trigger_name> --broker <broker_name> --filter type=dev.knative.samples.helloworld --sink ksvc:<service_name>

You can also filter events using multiple attributes. The following example shows how to filter events using the type, source, and extension attributes.

$ kn trigger create <trigger_name> --broker <broker_name> --sink ksvc:<service_name> \
--filter type=dev.knative.samples.helloworld \
--filter source=dev.knative.samples/helloworldsource \
--filter myextension=my-extension-value

9.4.8. Updating a trigger by using the Knative CLI

You can use the kn trigger update command with certain flags to update attributes for a trigger.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the kn CLI.

Procedure

  • Update a trigger:

    $ kn trigger update <trigger_name> --filter <key=value> --sink <sink_name> [flags]
    • You can update a trigger to filter exact event attributes that match incoming events. For example, using the type attribute:

      $ kn trigger update <trigger_name> --filter type=knative.dev.event
    • You can remove a filter attribute from a trigger. For example, you can remove the filter attribute with key type:

      $ kn trigger update <trigger_name> --filter type-
    • You can use the --sink parameter to change the event sink of a trigger:

      $ kn trigger update <trigger_name> --sink ksvc:my-event-sink

9.4.9. Deleting a trigger by using the Knative CLI

You can use the kn trigger delete command to delete a trigger.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the kn CLI.

Procedure

  • Delete a trigger:

    $ kn trigger delete <trigger_name>

Verification

  1. List existing triggers:

    $ kn trigger list
  2. Verify that the trigger no longer exists:

    Example output

    No triggers found.

9.5. Event delivery

You can configure event delivery parameters for Knative Eventing that are applied in cases where an event fails to be delivered by a subscription. Event delivery parameters are configured individually per subscription.

9.5.1. Event delivery behavior for Knative Eventing channels

Different Knative Eventing channel types have their own behavior patterns that are followed for event delivery. Developers can set event delivery parameters in the subscription configuration to ensure that any events that fail to be delivered from channels to an event sink are retried. You must also configure a dead letter sink for a subscription if you want to provide a sink where events that are never successfully delivered can be stored; otherwise, undelivered events are dropped.

9.5.1.1. Event delivery behavior for Knative Kafka channels

If an event is successfully delivered to a Kafka channel or broker receiver, the receiver responds with a 202 status code, which means that the event has been safely stored inside a Kafka topic and is not lost. If the receiver responds with any other status code, the event is not safely stored, and you must take steps to resolve the issue.

9.5.1.2. Delivery failure status codes

The channel or broker receiver can respond with the following status codes if an event fails to be delivered:

500
This is a generic status code which means that the event was not delivered successfully.
404
This status code means that the channel or broker the event is being delivered to does not exist, or that the Host header is incorrect.
400
This status code means that the event being sent to the receiver is invalid.

9.5.2. Configurable parameters

The following parameters can be configured for event delivery.

Dead letter sink
You can configure the deadLetterSink delivery parameter so that if an event fails to be delivered it is sent to the specified event sink.
Retries
You can set a minimum number of times that the delivery must be retried before the event is sent to the dead letter sink, by configuring the retry delivery parameter with an integer value.
Back off delay
You can set the backoffDelay delivery parameter to specify the time delay before an event delivery retry is attempted after a failure. The duration of the backoffDelay parameter is specified using the ISO 8601 format.
Back off policy
The backoffPolicy delivery parameter can be used to specify the retry back off policy. The policy can be specified as either linear or exponential. When using the linear back off policy, the back off delay is the time interval specified between retries. When using the exponential back off policy, the back off delay is equal to backoffDelay*2^<numberOfRetries>.
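
For example, if backoffDelay is set to PT2S, the linear policy waits 2 seconds between every retry, whereas with the exponential policy each retry waits twice as long as the previous one.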

9.5.3. Configuring event delivery failure parameters using subscriptions

Developers can configure event delivery parameters for individual subscriptions by modifying the delivery settings for a Subscription object.

Example subscription YAML

apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: <subscription_namespace>
spec:
  delivery:
    deadLetterSink: 1
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: <sink_name>
    backoffDelay: <duration> 2
    backoffPolicy: <policy_type> 3
    retry: <integer> 4

1
Configuration settings to enable using a dead letter sink. This tells the subscription what happens to events that cannot be delivered to the subscriber.

When this is configured, events that fail to be delivered are sent to the dead letter sink destination. The destination can be a Knative service or a URI.

2
You can set the backoffDelay delivery parameter to specify the time delay before an event delivery retry is attempted after a failure. The duration of the backoffDelay parameter is specified using the ISO 8601 format. For example, PT1S specifies a 1 second delay.
3
The backoffPolicy delivery parameter can be used to specify the retry back off policy. The policy can be specified as either linear or exponential. When using the linear back off policy, the back off delay is the time interval specified between retries. When using the exponential back off policy, the back off delay is equal to backoffDelay*2^<numberOfRetries>.
4
The number of times that event delivery is retried before the event is sent to the dead letter sink.
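
For reference, the following sketch shows these delivery settings in the context of a complete Subscription object, including the channel and subscriber fields that a subscription requires; all names are example assumptions:

apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: example-subscription
  namespace: default
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: Channel
    name: example-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: dead-letter-service
    backoffDelay: PT0.5S
    backoffPolicy: exponential
    retry: 5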

9.6. Knative Kafka

You can use the KafkaChannel channel type and KafkaSource event source with OpenShift Serverless. To do this, you must install the Knative Kafka components, and configure the integration between OpenShift Serverless and a supported Red Hat AMQ Streams cluster.

Note

Knative Kafka is not currently supported for IBM Z and IBM Power Systems.

9.6.1. Event delivery and retries

Using Kafka components in your event-driven architecture provides "at least once" guarantees for event delivery. This means that operations are retried until a successful return code is received. However, while this makes your application more resilient to lost events, it might result in duplicate events being sent.

For the Kafka event source, there is a fixed number of retries for event delivery by default. For Kafka channels, retries are only performed if they are configured in the Kafka channel Delivery spec.
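
For example, assuming that the retry parameters are set on the Subscription object that connects a Kafka channel to its subscriber, as described in the previous section, a minimal sketch might look as follows; the channel and service names are placeholders:

apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: <subscription_name>
spec:
  channel:
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    name: <kafka_channel_name>
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: <service_name>
  delivery:
    retry: 3
    backoffPolicy: exponential
    backoffDelay: PT0.5S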

9.6.2. Installing Knative Kafka

The OpenShift Serverless Operator provides the Knative Kafka API that can be used to create a KnativeKafka custom resource:

Example KnativeKafka custom resource

apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true 1
    bootstrapServers: <bootstrap_servers> 2
  source:
    enabled: true 3

1
Enables developers to use the KafkaChannel channel type in the cluster.
2
A comma-separated list of bootstrap servers from your AMQ Streams cluster.
3
Enables developers to use the KafkaSource event source type in the cluster.

9.6.2.1. Installing Knative Kafka components by using the web console

Cluster administrators can enable the use of Knative Kafka functionality in an OpenShift Serverless deployment by creating an instance of the KnativeKafka custom resource, which is provided by the OpenShift Serverless Operator API.

Prerequisites

  • You have installed OpenShift Serverless, including Knative Eventing, in your OpenShift Container Platform cluster.
  • You have access to a Red Hat AMQ Streams cluster.
  • You have cluster administrator permissions on OpenShift Container Platform.
  • You are logged in to the web console.

Procedure

  1. In the Administrator perspective, navigate to Operators → Installed Operators.
  2. Check that the Project dropdown at the top of the page is set to Project: knative-eventing.
  3. In the list of Provided APIs for the OpenShift Serverless Operator, find the Knative Kafka box and click Create Instance.
  4. Configure the KnativeKafka object in the Create Knative Kafka page.

    Important

    To use the Kafka channel or Kafka source on your cluster, you must toggle the Enable switch to true for the options that you want to use. These switches are set to false by default. Additionally, to use the Kafka channel, you must specify the Bootstrap Servers.

    • Using the form is recommended for simpler configurations that do not require full control of KnativeKafka object creation.
    • Editing the YAML is recommended for more complex configurations that require full control of KnativeKafka object creation. You can access the YAML by clicking the Edit YAML link in the top right of the Create Knative Kafka page.
  5. Click Create after you have completed any of the optional configurations for Kafka. You are automatically directed to the Knative Kafka tab where knative-kafka is in the list of resources.

Verification

  1. Click on the knative-kafka resource in the Knative Kafka tab. You are automatically directed to the Knative Kafka Overview page.
  2. View the list of Conditions for the resource and confirm that they have a status of True.

    Kafka Knative Overview page showing Conditions

    If the conditions have a status of Unknown or False, wait a few moments and then refresh the page.

  3. Check that the Knative Kafka resources have been created:

    $ oc get pods -n knative-eventing

    Example output

    NAME                                            READY   STATUS    RESTARTS   AGE
    kafka-ch-controller-85f879d577-xcbjh            1/1     Running   0          44s
    kafka-ch-dispatcher-55d76d7db8-ggqjl            1/1     Running   0          44s
    kafka-controller-manager-bc994c465-pt7qd        1/1     Running   0          40s
    kafka-webhook-54646f474f-wr7bb                  1/1     Running   0          42s

9.6.5. Configuring authentication for Kafka

In production, Kafka clusters are often secured using the TLS or SASL authentication methods. This section shows how to configure the Kafka channel to work against a protected Red Hat AMQ Streams (Kafka) cluster using TLS or SASL.

Note

If you enable SASL, Red Hat recommends that you also enable TLS.

9.6.5.1. Configuring TLS authentication

Prerequisites

  • A Kafka cluster CA certificate as a .pem file.
  • A Kafka cluster client certificate and key as .pem files.

Procedure

  1. Create the certificate files as secrets in your chosen namespace:

    $ oc create secret -n <namespace> generic <kafka_auth_secret> \
      --from-file=ca.crt=caroot.pem \
      --from-file=user.crt=certificate.pem \
      --from-file=user.key=key.pem
    Important

    Use the key names ca.crt, user.crt, and user.key. Do not change them.

  2. Start editing the KnativeKafka custom resource:

    $ oc edit knativekafka
  3. Reference your secret and the namespace of the secret:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      channel:
        authSecretName: <kafka_auth_secret>
        authSecretNamespace: <kafka_auth_secret_namespace>
        bootstrapServers: <bootstrap_servers>
        enabled: true
      source:
        enabled: true
    Note

    Make sure to specify the matching port in the bootstrap server.

    For example:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      channel:
        authSecretName: tls-user
        authSecretNamespace: kafka
        bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9094
        enabled: true
      source:
        enabled: true

9.6.5.2. Configuring SASL authentication

Prerequisites

  • A username and password for the Kafka cluster.
  • The SASL mechanism to use, for example, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.
  • If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster.
Note

Red Hat recommends enabling TLS in addition to SASL.

Procedure

  1. Create the certificate files as secrets in your chosen namespace:

    $ oc create secret --namespace <namespace> generic <kafka_auth_secret> \
      --from-file=ca.crt=caroot.pem \
      --from-literal=password="SecretPassword" \
      --from-literal=saslType="SCRAM-SHA-512" \
      --from-literal=user="my-sasl-user"
    Important

    Use the key names ca.crt, password, and saslType. Do not change them.

  2. Start editing the KnativeKafka custom resource:

    $ oc edit knativekafka
  3. Reference your secret and the namespace of the secret:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      channel:
        authSecretName: <kafka_auth_secret>
        authSecretNamespace: <kafka_auth_secret_namespace>
        bootstrapServers: <bootstrap_servers>
        enabled: true
      source:
        enabled: true
    Note

    Make sure to specify the matching port in the bootstrap server.

    For example:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      channel:
        authSecretName: scram-user
        authSecretNamespace: kafka
        bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9093
        enabled: true
      source:
        enabled: true

9.6.5.3. Configuring SASL authentication using public CA certificates

If you want to use SASL with public CA certificates, you must use the tls.enabled=true flag, rather than the ca.crt argument, when creating the secret. For example:

$ oc create secret --namespace <namespace> generic <kafka_auth_secret> \
  --from-literal=tls.enabled=true \
  --from-literal=password="SecretPassword" \
  --from-literal=saslType="SCRAM-SHA-512" \
  --from-literal=user="my-sasl-user"
