11.4. Using a PingSource

A PingSource is used to periodically send ping events with a constant payload to an event consumer.

A PingSource can be used to schedule the sending of events, similar to a timer, as shown in the following example:

apiVersion: sources.knative.dev/v1alpha2
kind: PingSource
metadata:
  name: test-ping-source
spec:
  schedule: "*/2 * * * *" 1
  jsonData: '{"message": "Hello world!"}' 2
  sink: 3
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
1 The schedule for the event, specified in CRON format; a couple of sample schedules are shown after this list.
2 The event message body, expressed as a JSON-encoded data string.
3 The details of the event consumer. In this example, we use a Knative service named event-display.
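
For reference, here are two other schedule values that use the same five-field CRON syntax (illustrative only; any standard CRON expression works):

    schedule: "* * * * *"    # fire every minute
    schedule: "0 0 * * *"    # fire once a day, at midnight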

11.4.1. Using a PingSource with the kn CLI

The following sections describe how to create, verify, and remove a basic PingSource by using the kn CLI.

Prerequisites

  • You have Knative Serving and Eventing installed.
  • You have the kn CLI installed.

Procedure

  1. To verify that the PingSource is working, create a simple Knative service that dumps incoming messages to the service's logs:

    $ kn service create event-display \
        --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest
  2. For each set of ping events that you want to request, create a PingSource in the same namespace as the event consumer (additional commands for managing the source are sketched after this procedure):

    $ kn source ping create test-ping-source \
        --schedule "*/2 * * * *" \
        --data '{"message": "Hello world!"}' \
        --sink svc:event-display
  3. Check that the controller is mapped correctly by entering the following command and inspecting the output:

    $ kn source ping describe test-ping-source
    Name:         test-ping-source
    Namespace:    default
    Annotations:  sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer
    Age:          15s
    Schedule:     */2 * * * *
    Data:         {"message": "Hello world!"}
    
    Sink:
      Name:       event-display
      Namespace:  default
      Resource:   Service (serving.knative.dev/v1)
    
    Conditions:
      OK TYPE                 AGE REASON
      ++ Ready                 8s
      ++ Deployed              8s
      ++ SinkProvided         15s
      ++ ValidSchedule        15s
      ++ EventTypeProvided    15s
      ++ ResourcesCorrect     15s
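
In addition to describe, the kn CLI can list and update ping sources in place. A minimal sketch, assuming the test-ping-source created in step 2; kn source ping update accepts the same --schedule, --data, and --sink flags as create (verify with kn source ping update --help):

    $ kn source ping list
    $ kn source ping update test-ping-source \
        --schedule "*/1 * * * *"

Updating the schedule or data in this way does not require deleting and recreating the source.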

Verification steps

You can verify that the Kubernetes events were sent to the Knative event sink by looking at the logs of the sink pod.

By default, Knative services terminate their pods if no traffic is received within a 60 second period. The example shown in this guide creates a PingSource that sends a message every 2 minutes, so each message should be observed in a newly created pod.
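
(Optional) If you would rather keep the event-display pod running for longer between pings, the scale-down window can be raised per revision. A minimal sketch, assuming the Knative Pod Autoscaler honors the autoscaling.knative.dev/window annotation (its default is 60s):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/window: "150s"  # keep the pod for ~2.5 minutes of inactivity
    spec:
      containers:
        - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest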

  1. Watch for new pods to be created:

    $ watch oc get pods
  2. Cancel watching the pods by using Ctrl+C, then look at the logs of the created pod:

    $ oc logs $(oc get pod -o name | grep event-display) -c user-container
    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.sources.ping
      source: /apis/v1/namespaces/default/pingsources/test-ping-source
      id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9
      time: 2020-04-07T16:16:00.000601161Z
      datacontenttype: application/json
    Data,
      {
        "message": "Hello world!"
      }

11.4.1.1. Remove the PingSource

  1. Delete the PingSource:

    $ kn source ping delete test-ping-source
  2. Delete the event-display service (you can confirm the cleanup as shown below):

    $ kn service delete event-display
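
To confirm that both resources are gone, you can list the remaining ping sources and services. This is an optional check, a minimal sketch using the kn CLI:

    $ kn source ping list
    $ kn service list

If the deletions succeeded, neither test-ping-source nor event-display appears in the output.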
    
    [id="serverless-pingsource-yaml_{context}"]
    = Using a PingSource with YAML
    
    The following sections describe how to create, verify and remove a basic PingSource using YAML files.
    
    .Prerequisites
    
    * You have Knative Serving and Eventing installed.
    
    [NOTE]
    ====
    The following procedure requires you to create YAML files.
    
    If you change the names of the YAML files from those used in the examples, you must ensure that you also update the corresponding CLI commands.
    ====
    
    .Procedure
    
    . To verify that the PingSource is working, create a simple Knative
    service that dumps incoming messages to the service's logs.
    .. Copy the example YAML into a file named `service.yaml`:
    +
    
    [source,yaml]
    ----
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: event-display
    spec:
      template:
        spec:
          containers:
            - image: quay.io/openshift-knative/knative-eventing-sources-event-display:v0.13.2
    ----
    
    .. Create the service:
    +
    
    [source,terminal]
    ----
    $ oc apply --filename service.yaml
    ----
    
    . For each set of ping events that you want to request, create a PingSource in the same namespace as the event consumer.
    .. Copy the example YAML into a file named `ping-source.yaml`:
    +
    
    [source,yaml]
    ----
    apiVersion: sources.knative.dev/v1alpha2
    kind: PingSource
    metadata:
      name: test-ping-source
    spec:
      schedule: "*/2 * * * *"
      jsonData: '{"message": "Hello world!"}'
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display
    ----
    
    .. Create the PingSource:
    +
    
    [source,terminal]
    ----
    $ oc apply --filename ping-source.yaml
    ----
    
    . Check that the controller is mapped correctly by entering the following command:
    +
    
    [source,terminal]
    ----
    $ oc get pingsource.sources.knative.dev test-ping-source -o yaml
    ----
    
    +
    .Example output
    +
    
    [source,terminal]
    ----
    apiVersion: sources.knative.dev/v1alpha2
    kind: PingSource
    metadata:
      annotations:
        sources.knative.dev/creator: developer
        sources.knative.dev/lastModifier: developer
      creationTimestamp: "2020-04-07T16:11:14Z"
      generation: 1
      name: test-ping-source
      namespace: default
      resourceVersion: "55257"
      selfLink: /apis/sources.knative.dev/v1alpha2/namespaces/default/pingsources/test-ping-source
      uid: 3d80d50b-f8c7-4c1b-99f7-3ec00e0a8164
    spec:
      jsonData: '{"message": "Hello world!"}'
      schedule: '*/2 * * * *'
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display
          namespace: default
    ----
    
    .Verification steps
    
    You can verify that the Kubernetes events were sent to the Knative event sink by looking at the sink pod's logs.
    
    By default, Knative services terminate their pods if no traffic is received within a 60 second period.
    The example shown in this guide creates a PingSource that sends a message every 2 minutes, so each message should be observed in a newly created pod.
    
    . Watch for new pods to be created:
    +
    
    [source,terminal]
    ----
    $ watch oc get pods
    ----
    
    . Cancel watching the pods using Ctrl+C, then look at the logs of the created pod:
    +
    
    [source,terminal]
    ----
    $ oc logs $(oc get pod -o name | grep event-display) -c user-container
    ----
    
    +
    .Example output
    +
    
    [source,terminal]
    ----
    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.sources.ping
      source: /apis/v1/namespaces/default/pingsources/test-ping-source
      id: 042ff529-240e-45ee-b40c-3a908129853e
      time: 2020-04-07T16:22:00.000791674Z
      datacontenttype: application/json
    Data,
      {
        "message": "Hello world!"
      }
    ----
    
    [id="pingsource-remove-yaml_{context}"]
    == Remove the PingSource
    
    . Delete the service by entering the following command:
    +
    
    [source,terminal]
    ----
    $ oc delete --filename service.yaml
    ----
    
    . Delete the PingSource by entering the following command:
    +
    
    [source,terminal]
    ----
    $ oc delete --filename ping-source.yaml
    ----
    
    :ServerlessProductName: OpenShift Serverless
    :ServerlessProductShortName: Serverless
    :ServerlessOperatorName: OpenShift Serverless Operator
    
    [id="metering-serverless"]
    = Using metering with {ServerlessProductName}
    :context: metering-serverless
    
    As a cluster administrator, you can use metering to analyze what is happening in your {ServerlessProductName} cluster.
    
    For more information about metering on {product-title}, see link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.3/html-single/metering/#about-metering[About metering].
    
    [id="installing-metering-serverless_{context}"]
    == Installing metering
    For information about installing metering on {product-title}, see link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.3/html-single/metering/#installing-metering[Installing Metering].
    
    
    [id="datasources-metering-serverless_{context}"]
    = Datasources for Knative Serving metering
    The following `ReportDataSources` are examples of how Knative Serving can be used with {product-title} metering.
    
    [id="knative-service-cpu-usage-ds_{context}"]
    == Datasource for CPU usage in Knative Serving
    This datasource provides the accumulated CPU seconds used per Knative service over the report time period.
    
    .YAML file
    [source,yaml]
    ----
    apiVersion: metering.openshift.io/v1
    kind: ReportDataSource
    metadata:
      name: knative-service-cpu-usage
    spec:
      prometheusMetricsImporter:
        query: >
          sum
              by(namespace,
                 label_serving_knative_dev_service,
                 label_serving_knative_dev_revision)
              (
                label_replace(rate(container_cpu_usage_seconds_total{container!="POD",container!="",pod!=""}[1m]), "pod", "$1", "pod", "(.*)")
                *
                on(pod, namespace)
                group_left(label_serving_knative_dev_service, label_serving_knative_dev_revision)
                kube_pod_labels{label_serving_knative_dev_service!=""}
              )
    ----
    
    [id="knative-service-memory-usage-ds_{context}"]
    == Datasource for memory usage in Knative Serving
    This datasource provides the average memory consumption per Knative service over the report time period.
    
    .YAML file
    [source,yaml]
    ----
    apiVersion: metering.openshift.io/v1
    kind: ReportDataSource
    metadata:
      name: knative-service-memory-usage
    spec:
      prometheusMetricsImporter:
        query: >
          sum
              by(namespace,
                 label_serving_knative_dev_service,
                 label_serving_knative_dev_revision)
              (
                label_replace(container_memory_usage_bytes{container!="POD", container!="",pod!=""}, "pod", "$1", "pod", "(.*)")
                *
                on(pod, namespace)
                group_left(label_serving_knative_dev_service, label_serving_knative_dev_revision)
                kube_pod_labels{label_serving_knative_dev_service!=""}
              )
    ----
    
    [id="applying-datasources-knative_{context}"]
    == Applying Datasources for Knative Serving metering
    You can apply the `ReportDataSources` by using the following command:
    
    [source,terminal]
    ----
    $ oc apply -f <datasource-name>.yaml
    ----
    
    .Example command
    
    [source,terminal]
    ----
    $ oc apply -f knative-service-memory-usage.yaml
    ----
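    
    After the data sources are applied, you can confirm that metering has registered them. A minimal check, assuming metering is installed in the default `openshift-metering` namespace:
    
    [source,terminal]
    ----
    $ oc get reportdatasources -n openshift-metering | grep knative-service
    ----
    
    Both `knative-service-cpu-usage` and `knative-service-memory-usage` should appear in the output once they have been created.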
    
    
    [id="queries-metering-serverless_{context}"]
    = Queries for Knative Serving metering
    The following `ReportQuery` resources reference the example `ReportDataSources` described previously.
    
    [id="knative-service-cpu-usage-query_{context}"]
    == Query for CPU usage in Knative Serving
    
    .YAML file
    
    [source,yaml]
    ----
    apiVersion: metering.openshift.io/v1
    kind: ReportQuery
    metadata:
      name: knative-service-cpu-usage
    spec:
      inputs:
      - name: ReportingStart
        type: time
      - name: ReportingEnd
        type: time
      - default: knative-service-cpu-usage
        name: KnativeServiceCpuUsageDataSource
        type: ReportDataSource
      columns:
      - name: period_start
        type: timestamp
        unit: date
      - name: period_end
        type: timestamp
        unit: date
      - name: namespace
        type: varchar
        unit: kubernetes_namespace
      - name: service
        type: varchar
      - name: data_start
        type: timestamp
        unit: date
      - name: data_end
        type: timestamp
        unit: date
      - name: service_cpu_seconds
        type: double
        unit: cpu_core_seconds
      query: |
        SELECT
          timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart| prestoTimestamp |}' AS period_start,
          timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' AS period_end,
          labels['namespace'] as project,
          labels['label_serving_knative_dev_service'] as service,
          min("timestamp") as data_start,
          max("timestamp") as data_end,
          sum(amount * "timeprecision") AS service_cpu_seconds
        FROM {| dataSourceTableName .Report.Inputs.KnativeServiceCpuUsageDataSource |}
        WHERE "timestamp" >= timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}'
        AND "timestamp" < timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}'
        GROUP BY labels['namespace'],labels['label_serving_knative_dev_service']
    ----
    
    [id="knative-service-memory-usage-query_{context}"]
    == Query for memory usage in Knative Serving
    
    .YAML file
    
    [source,yaml]
    ----
    apiVersion: metering.openshift.io/v1
    kind: ReportQuery
    metadata:
      name: knative-service-memory-usage
    spec:
      inputs:
      - name: ReportingStart
        type: time
      - name: ReportingEnd
        type: time
      - default: knative-service-memory-usage
        name: KnativeServiceMemoryUsageDataSource
        type: ReportDataSource
      columns:
      - name: period_start
        type: timestamp
        unit: date
      - name: period_end
        type: timestamp
        unit: date
      - name: namespace
        type: varchar
        unit: kubernetes_namespace
      - name: service
        type: varchar
      - name: data_start
        type: timestamp
        unit: date
      - name: data_end
        type: timestamp
        unit: date
      - name: service_usage_memory_byte_seconds
        type: double
        unit: byte_seconds
      query: |
        SELECT
          timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart| prestoTimestamp |}' AS period_start,
          timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' AS period_end,
          labels['namespace'] as project,
          labels['label_serving_knative_dev_service'] as service,
          min("timestamp") as data_start,
          max("timestamp") as data_end,
          sum(amount * "timeprecision") AS service_usage_memory_byte_seconds
        FROM {| dataSourceTableName .Report.Inputs.KnativeServiceMemoryUsageDataSource |}
        WHERE "timestamp" >= timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}'
        AND "timestamp" < timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}'
        GROUP BY labels['namespace'],labels['label_serving_knative_dev_service']
    ----
    
    [id="applying-queries-knative_{context}"]
    == Applying Queries for Knative Serving metering
    
    . Apply the `ReportQuery` by entering the following command:
    +
    
    [source,terminal]
    ----
    $ oc apply -f <query-name>.yaml
    ----
    
    +
    .Example command
    +
    
    [source,terminal]
    ----
    $ oc apply -f knative-service-memory-usage.yaml
    ----
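    
    As with the data sources, you can confirm that the queries are registered. A minimal check, assuming metering runs in the default `openshift-metering` namespace:
    
    [source,terminal]
    ----
    $ oc get reportqueries -n openshift-metering | grep knative-service
    ----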
    
    
    [id="reports-metering-serverless_{context}"]
    = Metering reports for Knative Serving
    
    You can run metering reports against Knative Serving by creating `Report` resources.
    Before you run a report, you must modify the input parameter within the `Report` resource to specify the start and end dates of the reporting period.
    
    .YAML file
    
    [source,yaml]
    ----
    apiVersion: metering.openshift.io/v1
    kind: Report
    metadata:
      name: knative-service-cpu-usage
    spec:
      reportingStart: '2019-06-01T00:00:00Z' <1>
      reportingEnd: '2019-06-30T23:59:59Z' <2>
      query: knative-service-cpu-usage <3>
      runImmediately: true
    ----
    
    <1> Start date of the report, in ISO 8601 format.
    <2> End date of the report, in ISO 8601 format.
    <3> Either `knative-service-cpu-usage` for a CPU usage report, or `knative-service-memory-usage` for a memory usage report; a memory usage variant is sketched below.
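    
    For example, a memory usage report over the same period changes only the `name` and `query` fields (a sketch; the dates are placeholders for your own reporting period):
    
    [source,yaml]
    ----
    apiVersion: metering.openshift.io/v1
    kind: Report
    metadata:
      name: knative-service-memory-usage
    spec:
      reportingStart: '2019-06-01T00:00:00Z'
      reportingEnd: '2019-06-30T23:59:59Z'
      query: knative-service-memory-usage
      runImmediately: true
    ----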
    
    [id="reports-metering-serverless-run_{context}"]
    == Running a metering report
    
    . Run the report by entering the following command:
    +
    
    [source,terminal]
    ----
    $ oc apply -f <report-name>.yaml
    ----
    
    . You can then check the report by entering the following command:
    +
    
    [source,terminal]
    ----
    $ oc get report
    ----
    
    +
    .Example output
    +
    
    [source,terminal]
    ----
    NAME                        QUERY                       SCHEDULE   RUNNING    FAILED   LAST REPORT TIME       AGE
    knative-service-cpu-usage   knative-service-cpu-usage              Finished            2019-06-30T23:59:59Z   10h
    ----
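    
    If a report stays in the running state or reports a failure, describing the resource shows its status conditions. A minimal sketch, assuming metering runs in the default `openshift-metering` namespace:
    
    [source,terminal]
    ----
    $ oc describe report knative-service-cpu-usage -n openshift-metering
    ----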
    