Chapter 14. CLI tools

14.1. Installing the Knative CLI

The Knative CLI (kn) does not have its own login mechanism. To log in to the cluster, you must install the oc CLI and use the oc login command.

Installation options for the oc CLI will vary depending on your operating system.

For more information on installing the oc CLI for your operating system and logging in with oc, see the OpenShift CLI getting started documentation.
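For example, a typical interactive login looks similar to the following command, where the cluster API URL, user name, and password are placeholders for your environment:

$ oc login -u <username> -p <password> https://<cluster_api_url>:6443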

Important

If you try to use an older version of the Knative kn CLI with a newer OpenShift Serverless release, the API is not found and an error occurs.

For example, if you use the 1.16.0 release of the kn CLI, which uses the 0.22.0 versions of the Knative Serving and Knative Eventing APIs, with the 1.17.0 OpenShift Serverless release, which uses the 0.23.0 versions of those APIs, the CLI does not work because it continues to look for the outdated 0.22.0 API versions.

Ensure that you are using the latest kn CLI version for your OpenShift Serverless release to avoid issues.
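You can check which version of the kn CLI you currently have installed by running the following command:

$ kn version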

14.1.1. Installing the Knative CLI using the OpenShift Container Platform web console

After the OpenShift Serverless Operator is installed, a link to download the Knative CLI (kn) for Linux (x86_64, amd64, s390x, ppc64le), macOS, or Windows appears on the Command Line Tools page in the OpenShift Container Platform web console.

You can access the Command Line Tools page by clicking the question circle icon in the upper-right corner of the web console and selecting Command Line Tools from the drop-down menu.

Procedure

  1. Download the kn CLI from the Command Line Tools page.
  2. Unpack the archive:

    $ tar -xf <file>
  3. Move the kn binary to a directory on your PATH, as shown in the example after this procedure.
  4. To check your PATH, run:

    $ echo $PATH
    Note

    If you do not use RHEL or Fedora, ensure that libc is installed in a directory on your library path. If libc is not available, you might see the following error when you run CLI commands:

    $ kn: No such file or directory
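For example, on many Linux systems /usr/local/bin is already on the PATH, so you might complete step 3 by moving the binary there; the destination directory shown here is only an example:

$ sudo mv ./kn /usr/local/bin/kn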

14.1.2. Installing the Knative CLI for Linux using an RPM

For Red Hat Enterprise Linux (RHEL), you can install the Knative CLI (kn) as an RPM if you have an active OpenShift Container Platform subscription on your Red Hat account.

Procedure

  1. Register with the Red Hat Subscription Management (RHSM) service:

    # subscription-manager register
  2. Refresh the RHSM:

    # subscription-manager refresh
  3. Attach the subscription to the system by specifying the ID of the subscription pool, using the --pool option:

    # subscription-manager attach --pool=<pool_id> 1
    1
    Pool ID for an active OpenShift Container Platform subscription
  4. Enable the repository using Red Hat Subscription Manager:

    # subscription-manager repos --enable="openshift-serverless-1-for-rhel-8-x86_64-rpms"
  5. Install the openshift-serverless-clients on the system:

    # yum install openshift-serverless-clients

14.1.3. Installing the Knative CLI for Linux

For Linux distributions, you can download the Knative CLI (kn) directly as a tar.gz archive.

Procedure

  1. Download the kn CLI.
  2. Unpack the archive:

    $ tar -xf <file>
  3. Move the kn binary to a directory on your PATH.
  4. To check your PATH, run:

    $ echo $PATH
    Note

    If you do not use RHEL or Fedora, ensure that libc is installed in a directory on your library path. If libc is not available, you might see the following error when you run CLI commands:

    $ kn: No such file or directory

14.1.4. Installing the Knative CLI for Linux on IBM Power Systems using an RPM

For Red Hat Enterprise Linux (RHEL), you can install the Knative CLI (kn) as an RPM if you have an active OpenShift Container Platform subscription on your Red Hat account.

Procedure

  1. Register with the Red Hat Subscription Management (RHSM) service:

    # subscription-manager register
  2. Refresh the RHSM:

    # subscription-manager refresh
  3. Attach the subscription to the system by specifying the ID of the subscription pool, using the --pool option:

    # subscription-manager attach --pool=<pool_id> 1
    1
    Pool ID for an active OpenShift Container Platform subscription
  4. Enable the repository using Red Hat Subscription Manager:

    # subscription-manager repos --enable="openshift-serverless-1-for-rhel-8-ppc64le-rpms"
  5. Install the openshift-serverless-clients on the system:

    # yum install openshift-serverless-clients

14.1.5. Installing the Knative CLI for Linux on IBM Power Systems

For Linux distributions, you can download the Knative CLI (kn) directly as a tar.gz archive.

Procedure

  1. Download the kn CLI.
  2. Unpack the archive:

    $ tar -xf <file>
  3. Move the kn binary to a directory on your PATH.
  4. To check your PATH, run:

    $ echo $PATH
    Note

    If you do not use RHEL, ensure that libc is installed in a directory on your library path.

    If libc is not available, you might see the following error when you run CLI commands:

    $ kn: No such file or directory

14.1.6. Installing the Knative CLI for Linux on IBM Z and LinuxONE using an RPM

For Red Hat Enterprise Linux (RHEL), you can install the Knative CLI (kn) as an RPM if you have an active OpenShift Container Platform subscription on your Red Hat account.

Procedure

  1. Register with the Red Hat Subscription Management (RHSM) service:

    # subscription-manager register
  2. Refresh the RHSM:

    # subscription-manager refresh
  3. Attach the subscription to the system by specifying the ID of the subscription pool, using the --pool option:

    # subscription-manager attach --pool=<pool_id> 1
    1
    Pool ID for an active OpenShift Container Platform subscription
  4. Enable the repository using Red Hat Subscription Manager:

    # subscription-manager repos --enable="openshift-serverless-1-for-rhel-8-s390x-rpms"
  5. Install the openshift-serverless-clients on the system:

    # yum install openshift-serverless-clients

14.1.7. Installing the Knative CLI for Linux on IBM Z and LinuxONE

For Linux distributions, you can download the Knative CLI (kn) directly as a tar.gz archive.

Procedure

  1. Download the kn CLI.
  2. Unpack the archive:

    $ tar -xf <file>
  3. Move the kn binary to a directory on your PATH.
  4. To check your PATH, run:

    $ echo $PATH
    Note

    If you do not use RHEL, ensure that libc is installed in a directory on your library path.

    If libc is not available, you might see the following error when you run CLI commands:

    $ kn: No such file or directory

14.1.8. Installing the Knative CLI for macOS

The Knative CLI (kn) for macOS is provided as a tar.gz archive.

Procedure

  1. Download the kn CLI.
  2. Unpack and unzip the archive.
  3. Move the kn binary to a directory on your PATH.
  4. To check your PATH, open a terminal window and run:

    $ echo $PATH

14.1.9. Installing the Knative CLI for Windows

The Knative CLI (kn) for Windows is provided as a zip archive.

Procedure

  1. Download the kn CLI.
  2. Extract the archive with a ZIP program.
  3. Move the kn binary to a directory on your PATH.
  4. To check your PATH, open the command prompt and run the command:

    C:\> path

14.2. Knative CLI advanced configuration

You can customize and extend the kn CLI by using advanced features, such as configuring a config.yaml file for kn or using plug-ins.

14.2.1. Customizing the Knative CLI

You can customize your kn CLI setup by creating a config.yaml configuration file. You can provide this configuration by using the --config flag, otherwise the configuration is picked up from a default location. The default configuration location conforms to the XDG Base Directory Specification, and is different for Unix systems and Windows systems.

For Unix systems:

  • If the XDG_CONFIG_HOME environment variable is set, the default configuration location that the kn CLI looks for is $XDG_CONFIG_HOME/kn.
  • If the XDG_CONFIG_HOME environment variable is not set, the kn CLI looks for the configuration in the home directory of the user at $HOME/.config/kn/config.yaml.

For Windows systems, the default kn CLI configuration location is %APPDATA%\kn.

Example configuration file

plugins:
  path-lookup: true 1
  directory: ~/.config/kn/plugins 2
eventing:
  sink-mappings: 3
  - prefix: svc 4
    group: core 5
    version: v1 6
    resource: services 7

1
Specifies whether the kn CLI should look for plug-ins in the PATH environment variable. This is a boolean configuration option. The default value is false.
2
Specifies the directory where the kn CLI will look for plug-ins. The default path depends on the operating system, as described above. This can be any directory that is visible to the user.
3
The sink-mappings spec defines the Kubernetes addressable resource that is used when you use the --sink flag with a kn CLI command.
4
The prefix that you want to use to describe your sink. The prefixes svc for a service, channel, and broker are predefined in kn.
5
The API group of the Kubernetes resource.
6
The version of the Kubernetes resource.
7
The plural name of the Kubernetes resource type. For example, services or brokers.
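
For example, to run kn with a configuration file that is stored outside the default location, pass the --config flag with any command. The path shown here is only an example:

$ kn service list --config /tmp/kn/config.yaml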

14.2.2. Knative CLI plug-ins

The kn CLI supports the use of plug-ins, which enable you to extend the functionality of your kn installation by adding custom commands and other shared commands that are not part of the core distribution. kn CLI plug-ins are used in the same way as the main kn functionality.

Currently, Red Hat supports the kn-source-kafka plug-in.
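
For example, assuming the kn-source-kafka plug-in is installed and discoverable by the kn CLI, its commands are available under the kn source kafka command group and can be explored in the same way as built-in commands:

$ kn source kafka --help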

14.3. kn flags reference

14.3.1. Knative CLI --sink flag

When you create an event-producing custom resource by using the Knative (kn) CLI, you can use the --sink flag to specify a sink where events from that resource are sent.

The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local, as the sink:

Example command using the --sink flag

$ kn source binding create bind-heartbeat \
  --namespace sinkbinding-example \
  --subject "Job:batch/v1:app=heartbeat-cron" \
  --sink http://event-display.svc.cluster.local \ 1
  --ce-override "sink=bound"

1
svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel and broker.
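
For example, assuming that a broker named default exists in the current namespace, a ping source can target it by using the broker prefix. The source and broker names in this command are placeholders:

$ kn source ping create test-ping \
  --data '{"message": "Hello world!"}' \
  --sink broker:default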

14.4. Knative Serving CLI commands

You can use the following kn CLI commands to complete Knative Serving tasks on the cluster.

14.4.1. kn service commands

You can use the following commands to create and manage Knative services.

14.4.1.1. Creating serverless applications by using the Knative CLI

The following procedure describes how you can create a basic serverless application using the kn CLI.

Prerequisites

  • OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have installed the kn CLI.

Procedure

  • Create a Knative service:

    $ kn service create <service-name> --image <image> --env <key=value>

    Example command

    $ kn service create event-display \
        --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest

    Example output

    Creating service 'event-display' in namespace 'default':
    
      0.271s The Route is still working to reflect the latest desired specification.
      0.580s Configuration "event-display" is waiting for a Revision to become ready.
      3.857s ...
      3.861s Ingress has not yet been reconciled.
      4.270s Ready to serve.
    
    Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL:
    http://event-display-default.apps-crc.testing

14.4.1.2. Updating serverless applications by using the Knative CLI

You can use the kn service update command for interactive sessions on the command line as you build up a service incrementally. In contrast to the kn service apply command, when using the kn service update command you only have to specify the changes that you want to update, rather than the full configuration for the Knative service.

Example commands

  • Update a service by adding a new environment variable:

    $ kn service update <service_name> --env <key>=<value>
  • Update a service by adding a new port:

    $ kn service update <service_name> --port 80
  • Update a service by adding new request and limit parameters:

    $ kn service update <service_name> --request cpu=500m --limit memory=1024Mi --limit cpu=1000m
  • Assign the latest tag to a revision:

    $ kn service update <service_name> --tag <revision_name>=latest
  • Update a tag from testing to staging for the latest READY revision of a service:

    $ kn service update <service_name> --untag testing --tag @latest=staging
  • Add the test tag to a revision that receives 10% of traffic, and send the rest of the traffic to the latest READY revision of a service:

    $ kn service update <service_name> --tag <revision_name>=test --traffic test=10,@latest=90

14.4.1.3. Applying service declarations

You can declaratively configure a Knative service by using the kn service apply command. If the service does not exist, it is created; otherwise, the existing service is updated with the options that have changed.

The kn service apply command is especially useful for shell scripts or in a continuous integration pipeline, where users typically want to fully specify the state of the service in a single command to declare the target state.

When using kn service apply you must provide the full configuration for the Knative service. This is different from the kn service update command, which only requires you to specify in the command the options that you want to update.

Example commands

  • Create a service:

    $ kn service apply <service_name> --image <image>
  • Add an environment variable to a service:

    $ kn service apply <service_name> --image <image> --env <key>=<value>
  • Read the service declaration from a JSON or YAML file:

    $ kn service apply <service_name> -f <filename>

14.4.1.4. Describing serverless applications by using the Knative CLI

You can describe a Knative service by using the kn service describe command.

Example commands

  • Describe a service:

    $ kn service describe --verbose <service_name>

    The --verbose flag is optional but can be included to provide a more detailed description. The difference between a regular and verbose output is shown in the following examples:

    Example output without --verbose flag

    Name:       hello
    Namespace:  default
    Age:        2m
    URL:        http://hello-default.apps.ocp.example.com
    
    Revisions:
      100%  @latest (hello-00001) [1] (2m)
            Image:  docker.io/openshift/hello-openshift (pinned to aaea76)
    
    Conditions:
      OK TYPE                   AGE REASON
      ++ Ready                   1m
      ++ ConfigurationsReady     1m
      ++ RoutesReady             1m

    Example output with --verbose flag

    Name:         hello
    Namespace:    default
    Annotations:  serving.knative.dev/creator=system:admin
                  serving.knative.dev/lastModifier=system:admin
    Age:          3m
    URL:          http://hello-default.apps.ocp.example.com
    Cluster:      http://hello.default.svc.cluster.local
    
    Revisions:
      100%  @latest (hello-00001) [1] (3m)
            Image:  docker.io/openshift/hello-openshift (pinned to aaea76)
            Env:    RESPONSE=Hello Serverless!
    
    Conditions:
      OK TYPE                   AGE REASON
      ++ Ready                   3m
      ++ ConfigurationsReady     3m
      ++ RoutesReady             3m

  • Describe a service in YAML format:

    $ kn service describe <service_name> -o yaml
  • Describe a service in JSON format:

    $ kn service describe <service_name> -o json
  • Print the service URL only:

    $ kn service describe <service_name> -o url

14.4.2. kn container commands

You can use the following commands to create and manage multiple containers in a Knative service spec.

14.4.2.1. Knative client multi-container support

You can use the kn container add command to print a YAML container spec to standard output. This command is useful for multi-container use cases because it can be used along with other standard kn flags to create definitions. It accepts all container-related flags that are supported for use with the kn service create command, and it can also be chained by using UNIX pipes (|) to create multiple container definitions at once.

14.4.2.1.1. Example commands
  • Add a container from an image and print it to standard output:

    $ kn container add <container_name> --image <image_uri>

    Example command

    $ kn container add sidecar --image docker.io/example/sidecar

    Example output

    containers:
    - image: docker.io/example/sidecar
      name: sidecar
      resources: {}

  • Chain two kn container add commands together, and then pass them to a kn service create command to create a Knative service with two containers:

    $ kn container add <first_container_name> --image <image_uri> | \
    kn container add <second_container_name> --image <image_uri> | \
    kn service create <service_name> --image <image_uri> --extra-containers -

    Specifying - as the value of --extra-containers is a special case in which kn reads the pipe input instead of a YAML file.

    Example command

    $ kn container add sidecar --image docker.io/example/sidecar:first | \
    kn container add second --image docker.io/example/sidecar:second | \
    kn service create my-service --image docker.io/example/my-app:latest --extra-containers -

    The --extra-containers flag can also accept a path to a YAML file:

    $ kn service create <service_name> --image <image_uri> --extra-containers <filename>

    Example command

    $ kn service create my-service --image docker.io/example/my-app:latest --extra-containers my-extra-containers.yaml

14.4.3. kn domain commands

You can use the following commands to create and manage domain mappings.

14.4.3.1. Creating a custom domain mapping by using the Knative CLI

You can use the kn CLI to create a DomainMapping custom resource (CR) that maps to an Addressable target CR, such as a Knative service or a Knative route.

The --ref flag specifies an Addressable target CR for domain mapping.

If a prefix is not provided when using the --ref flag, it is assumed that the target is a Knative service in the current namespace. The examples in the following procedure show the prefixes for mapping to a Knative service or a Knative route.

Prerequisites

  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have created a Knative service or route, and control a custom domain that you want to map to that CR.

    Note

    Your custom domain must point to the DNS of the OpenShift Container Platform cluster.

  • You have installed the kn CLI tool.

Procedure

  • Map a domain to a CR in the current namespace:

    $ kn domain create <domain_mapping_name> --ref <target_name>

    Example command

    $ kn domain create example.com --ref example-service

  • Map a domain to a Knative service in a specified namespace:

    $ kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>

    Example command

    $ kn domain create example.com --ref ksvc:example-service:example-namespace

  • Map a domain to a Knative route:

    $ kn domain create <domain_mapping_name> --ref <kroute:route_name>

    Example command

    $ kn domain create example.com --ref kroute:example-route

14.4.3.2. Managing custom domain mappings by using the Knative CLI

After you have created a DomainMapping custom resource (CR), you can list existing CRs, view information about an existing CR, update CRs, or delete CRs by using the kn CLI.

Prerequisites

  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have created at least one DomainMapping CR.
  • You have installed the kn CLI tool.

Procedure

  • List existing DomainMapping CRs:

    $ kn domain list -n <domain_mapping_namespace>
  • View details of an existing DomainMapping CR:

    $ kn domain describe <domain_mapping_name>
  • Update a DomainMapping CR to point to a new target:

    $ kn domain update <domain_mapping_name> --ref <target>
  • Delete a DomainMapping CR:

    $ kn domain delete <domain_mapping_name>

14.5. Knative Eventing CLI commands

You can use the following kn CLI commands to complete Knative Eventing tasks on the cluster.

14.5.1. kn source commands

You can use the following commands to list, create, and manage Knative event sources.

14.5.1.1. Listing available event source types by using the Knative CLI

Procedure

  1. List the available event source types in the terminal:

    $ kn source list-types

    Example output

    TYPE              NAME                                            DESCRIPTION
    ApiServerSource   apiserversources.sources.knative.dev            Watch and send Kubernetes API events to a sink
    PingSource        pingsources.sources.knative.dev                 Periodically send ping events to a sink
    SinkBinding       sinkbindings.sources.knative.dev                Binding for connecting a PodSpecable to a sink

  2. Optional: You can also list the available event source types in YAML format:

    $ kn source list-types -o yaml

14.5.1.2. Creating and managing container sources by using the Knative CLI

You can use the following kn commands to create and manage container sources:

Create a container source

$ kn source container create <container_source_name> --image <image_uri> --sink <sink>

Delete a container source

$ kn source container delete <container_source_name>

Describe a container source

$ kn source container describe <container_source_name>

List existing container sources

$ kn source container list

List existing container sources in YAML format

$ kn source container list -o yaml

Update a container source

This command updates the image URI for an existing container source:

$ kn source container update <container_source_name> --image <image_uri>

14.5.1.3. Creating an API server source by using the Knative CLI

This section describes the steps required to create an API server source using kn commands.

Prerequisites

  • You must have OpenShift Serverless, the Knative Serving and Eventing components, and the kn CLI installed.
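
The create command in the following procedure references a service account (shown as events-sa in the example output), and the related deletion procedure later in this chapter removes it with oc delete -f authentication.yaml. As a minimal sketch only, an authentication.yaml file that grants a service account named events-sa permission to watch events might look like the following; the exact permissions you require depend on the resources that your source watches:

# Service account used by the API server source
apiVersion: v1
kind: ServiceAccount
metadata:
  name: events-sa
  namespace: default
---
# Cluster role allowing the source to read and watch events
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-watcher
rules:
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - get
      - list
      - watch
---
# Binds the cluster role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-ra-event-watcher
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: event-watcher
subjects:
  - kind: ServiceAccount
    name: events-sa
    namespace: default

You can apply such a file with $ oc apply -f authentication.yaml before creating the source.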

Procedure

  1. Create an API server source that uses a broker as a sink:

    $ kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource "event:v1" --service-account <service_account_name> --mode Resource
  2. To check that the API server source is set up correctly, create a Knative service that dumps incoming messages to its log:

    $ kn service create <service_name> --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest
  3. Create a trigger to filter events from the default broker to the service:

    $ kn trigger create <trigger_name> --sink ksvc:<service_name>
  4. Create events by launching a pod in the default namespace:

    $ oc create deployment hello-node --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest
  5. Check that the controller is mapped correctly by inspecting the output generated by the following command:

    $ kn source apiserver describe <source_name>

    Example output

    Name:                mysource
    Namespace:           default
    Annotations:         sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer
    Age:                 3m
    ServiceAccountName:  events-sa
    Mode:                Resource
    Sink:
      Name:       default
      Namespace:  default
      Kind:       Broker (eventing.knative.dev/v1)
    Resources:
      Kind:        event (v1)
      Controller:  false
    Conditions:
      OK TYPE                     AGE REASON
      ++ Ready                     3m
      ++ Deployed                  3m
      ++ SinkProvided              3m
      ++ SufficientPermissions     3m
      ++ EventTypesProvided        3m

Verification

You can verify that the Kubernetes events were sent to Knative by looking at the message dumper function logs.

  1. Get the pods:

    $ oc get pods
  2. View the message dumper function logs for the pods:

    $ oc logs $(oc get pod -o name | grep event-display) -c user-container

    Example output

    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.apiserver.resource.update
      datacontenttype: application/json
      ...
    Data,
      {
        "apiVersion": "v1",
        "involvedObject": {
          "apiVersion": "v1",
          "fieldPath": "spec.containers{hello-node}",
          "kind": "Pod",
          "name": "hello-node",
          "namespace": "default",
           .....
        },
        "kind": "Event",
        "message": "Started container",
        "metadata": {
          "name": "hello-node.159d7608e3a3572c",
          "namespace": "default",
          ....
        },
        "reason": "Started",
        ...
      }

14.5.1.4. Deleting the API server source by using the Knative CLI

This section describes the steps used to delete the API server source, trigger, service account, cluster role, and cluster role binding using kn and oc commands.

Prerequisites

  • You must have the kn CLI installed.

Procedure

  1. Delete the trigger:

    $ kn trigger delete <trigger_name>
  2. Delete the event source:

    $ kn source apiserver delete <source_name>
  3. Delete the service account, cluster role, and cluster role binding:

    $ oc delete -f authentication.yaml

14.5.1.5. Creating a ping source by using the Knative CLI

The following procedure describes how to create a basic ping source by using the kn CLI.

Prerequisites

  • You have Knative Serving and Eventing installed.
  • You have the kn CLI installed.

Procedure

  1. To verify that the ping source is working, create a simple Knative service that dumps incoming messages to the service logs:

    $ kn service create event-display \
        --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest
  2. For each set of ping events that you want to request, create a ping source in the same namespace as the event consumer:

    $ kn source ping create test-ping-source \
        --schedule "*/2 * * * *" \
        --data '{"message": "Hello world!"}' \
        --sink ksvc:event-display
  3. Check that the controller is mapped correctly by entering the following command and inspecting the output:

    $ kn source ping describe test-ping-source

    Example output

    Name:         test-ping-source
    Namespace:    default
    Annotations:  sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer
    Age:          15s
    Schedule:     */2 * * * *
    Data:         {"message": "Hello world!"}
    
    Sink:
      Name:       event-display
      Namespace:  default
      Resource:   Service (serving.knative.dev/v1)
    
    Conditions:
      OK TYPE                 AGE REASON
      ++ Ready                 8s
      ++ Deployed              8s
      ++ SinkProvided         15s
      ++ ValidSchedule        15s
      ++ EventTypeProvided    15s
      ++ ResourcesCorrect     15s

Verification

You can verify that the Kubernetes events were sent to the Knative event sink by looking at the logs of the sink pod.

By default, Knative services terminate their pods if no traffic is received within a 60-second period. The example shown in this guide creates a ping source that sends a message every 2 minutes, so each message should be observed in a newly created pod.

  1. Watch for new pods created:

    $ watch oc get pods
  2. Cancel watching the pods using Ctrl+C, then look at the logs of the created pod:

    $ oc logs $(oc get pod -o name | grep event-display) -c user-container

    Example output

    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.sources.ping
      source: /apis/v1/namespaces/default/pingsources/test-ping-source
      id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9
      time: 2020-04-07T16:16:00.000601161Z
      datacontenttype: application/json
    Data,
      {
        "message": "Hello world!"
      }

14.5.1.6. Deleting a ping source by using the Knative CLI

The following procedure describes how to delete a ping source using the kn CLI.

  • Delete the ping source:

    $ kn source ping delete <ping_source_name>

14.5.1.7. Creating a Kafka event source by using the Knative CLI

This section describes how to create a Kafka event source by using the kn command.

Prerequisites

  • The OpenShift Serverless Operator, Knative Eventing, Knative Serving, and the KnativeKafka custom resource (CR) are installed on your cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import.

Procedure

  1. To verify that the Kafka event source is working, create a Knative service that dumps incoming events into the service logs:

    $ kn service create event-display \
        --image quay.io/openshift-knative/knative-eventing-sources-event-display
  2. Create a KafkaSource CR:

    $ kn source kafka create <kafka_source_name> \
        --servers <cluster_kafka_bootstrap>.kafka.svc:9092 \
        --topics <topic_name> --consumergroup my-consumer-group \
        --sink event-display
    Note

    Replace the placeholder values in this command with values for your source name, bootstrap servers, and topics.

    The --servers, --topics, and --consumergroup options specify the connection parameters to the Kafka cluster. The --consumergroup option is optional.

  3. Optional: View details about the KafkaSource CR you created:

    $ kn source kafka describe <kafka_source_name>

    Example output

    Name:              example-kafka-source
    Namespace:         kafka
    Age:               1h
    BootstrapServers:  example-cluster-kafka-bootstrap.kafka.svc:9092
    Topics:            example-topic
    ConsumerGroup:     example-consumer-group
    
    Sink:
      Name:       event-display
      Namespace:  default
      Resource:   Service (serving.knative.dev/v1)
    
    Conditions:
      OK TYPE            AGE REASON
      ++ Ready            1h
      ++ Deployed         1h
      ++ SinkProvided     1h

Verification steps

  1. Trigger the Kafka instance to send a message to the topic:

    $ oc -n kafka run kafka-producer \
        -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true \
        --restart=Never -- bin/kafka-console-producer.sh \
        --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic

    Enter the message in the prompt. This command assumes that:

    • The Kafka cluster is installed in the kafka namespace.
    • The KafkaSource object has been configured to use the my-topic topic.
  2. Verify that the message arrived by viewing the logs:

    $ oc logs $(oc get pod -o name | grep event-display) -c user-container

    Example output

    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.kafka.event
      source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic
      subject: partition:46#0
      id: partition:46/offset:0
      time: 2021-03-10T11:21:49.4Z
    Extensions,
      traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00
    Data,
      Hello!

14.6. kn func

14.6.1. Creating functions

You can create a basic serverless function using the kn CLI.

You can specify the path, runtime, template, and repository with the template as flags on the command line, or use the -c flag to start the interactive experience in the terminal.

Procedure

  • Create a function project:

    $ kn func create -r <repository> -l <runtime> -t <template> <path>
    • Supported runtimes include node, go, python, quarkus, and typescript.
    • Supported templates include http and events.

      Example command

      $ kn func create -l typescript -t events examplefunc

      Example output

      Project path:  /home/user/demo/examplefunc
      Function name: examplefunc
      Runtime:       typescript
      Template:      events
      Writing events to /home/user/demo/examplefunc

    • Alternatively, you can specify a repository that contains a custom template.

      Example command

      $ kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc

      Example output

      Project path:  /home/user/demo/examplefunc
      Function name: examplefunc
      Runtime:       node
      Template:      hello-world
      Writing events to /home/user/demo/examplefunc

14.6.2. Building functions

Before you can run a function, you must build the function project by using the kn func build command. The build command reads the func.yaml file from the function project directory to determine the image name and registry.

Example func.yaml

name: example-function
namespace: default
runtime: node
image: <image_from_registry>
imageDigest: ""
trigger: http
builder: default
builderMap:
  default: quay.io/boson/faas-nodejs-builder
envs: {}

If the image name and registry are not set in the func.yaml file, you must either specify the registry by using the -r flag with the kn func build command, or you are prompted to provide a registry value in the terminal when building the function. An image name is then derived from the registry value that you provide.

Example command using the -r registry flag

$ kn func build [-i <image> -r <registry> -p <path>]

Example output

Building function image
Function image has been built, image: quay.io/username/example-function:latest

This command creates an OCI container image that can be run locally on your computer, or on a Kubernetes cluster.

Example using the registry prompt

$ kn func build
A registry for function images is required (e.g. 'quay.io/boson').

Registry for function images: quay.io/username
Building function image
Function image has been built, image: quay.io/username/example-function:latest

The values for image and registry are persisted to the func.yaml file, so that subsequent invocations do not require the user to specify these again.

14.6.3. Deploying functions

You can deploy a function to your cluster as a Knative service by using the kn func deploy command.

If the targeted function is already deployed, it is updated with a new container image that is pushed to a container image registry, and the Knative service is updated.

Prerequisites

  • You must have already initialized the function that you want to deploy.

Procedure

  • Deploy a function:

    $ kn func deploy [-n <namespace> -p <path> -i <image> -r <registry>]

    Example output

    Function deployed at: http://func.example.com

    • If no namespace is specified, the function is deployed in the current namespace.
    • The function is deployed from the current directory, unless a path is specified.
    • The Knative service name is derived from the project name, and cannot be changed using this command.

14.6.4. Listing existing functions

You can list existing functions by using kn func list. If you want to list functions that have been deployed as Knative services, you can also use kn service list.

Procedure

  • List existing functions:

    $ kn func list [-n <namespace> -p <path>]

    Example output

    NAME           NAMESPACE  RUNTIME  URL                                                                                      READY
    example-function  default    node     http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com  True

  • List functions deployed as Knative services:

    $ kn service list -n <namespace>

    Example output

    NAME            URL                                                                                       LATEST                AGE   CONDITIONS   READY   REASON
    example-function   http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com   example-function-gzl4c   16m   3 OK / 3     True

14.6.5. Describing a function

The kn func info command prints information about a deployed function, such as the function name, image, namespace, Knative service information, route information, and event subscriptions.

Procedure

  • Describe a function:

    $ kn func info [-f <format> -n <namespace> -p <path>]

    Example command

    $ kn func info -p function/example-function

    Example output

    Function name:
      example-function
    Function is built in image:
      docker.io/user/example-function:latest
    Function is deployed as Knative Service:
      example-function
    Function is deployed in namespace:
      default
    Routes:
      http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com

14.6.6. Emitting a test event to a deployed function

You can use the kn func emit CLI command to emit a CloudEvent to a function that is either deployed locally or deployed to your OpenShift Container Platform cluster. This command can be used to test that a function is working and able to receive events correctly.

Example command

$ kn func emit

The kn func emit command executes on the local directory by default, and assumes that this directory is a function project.

14.6.6.1. kn func emit optional parameters

You can specify optional parameters for the emitted CloudEvent by using the kn func emit CLI command flags.

List of flags from --help command output

Flags:
  -c, --content-type string   The MIME Content-Type for the CloudEvent data  (Env: $FUNC_CONTENT_TYPE) (default "application/json")
  -d, --data string           Any arbitrary string to be sent as the CloudEvent data. Ignored if --file is provided  (Env: $FUNC_DATA)
  -f, --file string           Path to a local file containing CloudEvent data to be sent  (Env: $FUNC_FILE)
  -h, --help                  help for emit
  -i, --id string             CloudEvent ID (Env: $FUNC_ID) (default "306bd6a0-0b0a-48ba-b187-b633571d072a")
  -p, --path string           Path to the project directory. Ignored when --sink is provided (Env: $FUNC_PATH) (default "/home/lanceball/src/github.com/nodeshift/opossum")
  -k, --sink string           Send the CloudEvent to the function running at [sink]. The special value "local" can be used to send the event to a function running on the local host. When provided, the --path flag is ignored  (Env: $FUNC_SINK)
  -s, --source string         CloudEvent source (Env: $FUNC_SOURCE) (default "/boson/fn")
  -t, --type string           CloudEvent type  (Env: $FUNC_TYPE) (default "boson.fn")

In particular, you might find it useful to specify the following parameters:

Event type
The type of event being emitted. You can find information about the type parameter that is set for events from a certain event producer in the documentation for that event producer. For example, the API server source may set the type parameter of produced events as dev.knative.apiserver.resource.update.
Event source
The unique event source that produced the event. This may be a URI for the event source, for example https://10.96.0.1/, or the name of the event source.
Event ID
A random, unique ID that is created by the event producer.
Event data

Allows you to specify a data value for the event sent by the kn func emit command. For example, you can specify a --data value such as "Hello world!" so that the event contains this data string. By default, no data is included in the events created by kn func emit.

Note

Functions that have been deployed to a cluster can respond to events from an existing event source that provides values for properties such as source and type. These events often have a data value in JSON form, which captures the domain specific context of the event. Using the CLI flags noted in this document, developers can simulate those events for local testing.

You can also send event data using the --file flag to provide a local file containing data for the event.

Data content type
If you are using the --data flag to add data for events, you can also specify what type of data is carried by the event, by using the --content-type flag. In the previous example, the data is plain text, so you might specify kn func emit --data "Hello world!" --content-type "text/plain".

Example commands specifying event parameters by using flags

$ kn func emit --type <event_type> --source <event_source> --data <event_data> --content-type <content_type> -i <event_ID>

$ kn func emit --type ping --source example-ping --data "Hello world!" --content-type "text/plain" -i example-ID

Example commands specifying a file on disk that contains the event parameters

$ kn func emit --file <path>

$ kn func emit --file ./test.json

Example commands specifying a path to the function

You can specify a path to the function project by using the --path flag, or specify an endpoint for the function by using the --sink flag:

$ kn func emit --path <path_to_function>
$ kn func emit --path ./example/example-function

Example commands specifying a function deployed as a Knative service (sink)

$ kn func emit --sink <service_URL>

$ kn func emit --sink "http://example.function.com"

The --sink flag also accepts the special value local to send an event to a function running locally:

$ kn func emit --sink local

14.6.7. Deleting a function

You can delete a function from your cluster by using the kn func delete command.

Procedure

  • Delete a function:

    $ kn func delete [<function_name> -n <namespace> -p <path>]
    • If the name or path of the function to delete is not specified, the current directory is searched for a func.yaml file that is used to determine the function to delete.
    • If the namespace is not specified, it defaults to the namespace value in the func.yaml file.
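
For example, to delete a function named example-function (a placeholder name) from the default namespace:

$ kn func delete example-function -n default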