Chapter 7. Using the AMQ Streams Kafka Bridge

This chapter provides an overview of the AMQ Streams Kafka Bridge and helps you get started using its REST API to interact with AMQ Streams.

Note

For the full list of REST API endpoints and descriptions, including example requests and responses, see Kafka Bridge API reference. For information on how to deploy and configure the Kafka Bridge, see Section 2.7, “Kafka Bridge”.

7.1. Overview of the AMQ Streams Kafka Bridge

The AMQ Streams Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster running on OpenShift. The API enables such clients to produce and consume messages without the need to use the native Kafka protocol.

The API has two main resources, consumers and topics, which are exposed through endpoints for interacting with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not to the consumers and producers connected directly to Kafka.

You can:

  • Send messages to a topic.
  • Create and delete consumers.
  • Subscribe consumers to topics, so that they start receiving messages from those topics.
  • Unsubscribe consumers from topics.
  • Assign partitions to consumers.
  • Retrieve messages from topics.
  • Commit a list of consumer offsets.
  • Seek on a partition, so that a consumer starts receiving messages from the first or last offset position, or a given offset position.
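For example, the first operation in this list, sending messages to a topic, takes a single HTTP request. The following is a minimal sketch; it assumes the Kafka Bridge is reachable at http://localhost:8080 and that the topic is named my-topic, both of which depend on your deployment:

# Send one JSON-formatted record to the topic my-topic.
curl -X POST http://localhost:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{
    "records": [
      { "key": "my-key", "value": "hello-world" }
    ]
  }'

The required headers and data formats for such requests are described in Section 7.5.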

Similar to a Kafka Connect cluster, you can deploy the Kafka Bridge into your OpenShift cluster using the Cluster Operator. For deployment instructions, see Section 2.7, “Kafka Bridge”.

After the Kafka Bridge is deployed, the Cluster Operator creates a Deployment, Service, and Pod in your OpenShift cluster, each named strimzi-kafka-bridge by default.

7.2. Supported clients for the AMQ Streams Kafka Bridge

You can use the Kafka Bridge to integrate both internal and external HTTP client applications with your Kafka cluster.

  • Internal clients are container-based HTTP clients running in the same OpenShift cluster as the Kafka Bridge itself.
  • External clients are HTTP clients running outside the OpenShift cluster in which the Kafka Bridge is deployed and running.

Internal clients can access the Kafka Bridge on the host and port defined in the KafkaBridge custom resource. External clients can access the Kafka Bridge through an OpenShift Route, a LoadBalancer Service, or a Kubernetes Ingress.
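For example, an internal client in another pod can call the Kafka Bridge directly through its Service. This is a minimal sketch; it assumes the default Service name kafka-bridge-name-bridge-service and the default HTTP port 8080, both of which depend on your KafkaBridge custom resource:

# Run from a pod in the same OpenShift cluster as the Kafka Bridge.
# The /healthy endpoint is used for liveness checks and returns a
# success status when the bridge is running.
curl http://kafka-bridge-name-bridge-service:8080/healthy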

7.3. Securing the AMQ Streams Kafka Bridge

AMQ Streams does not currently provide any encryption, authentication, or authorization for the Kafka Bridge. This means that requests sent from external clients to the Kafka Bridge are:

  • Not encrypted, and must use HTTP rather than HTTPS
  • Sent without authentication

However, you can secure the Kafka Bridge using other methods, such as:

  • OpenShift Network Policies that define which pods can access the Kafka Bridge.
  • Reverse proxies with authentication or authorization, for example, OAuth2 proxies.
  • API Gateways.
  • Kubernetes Ingress or OpenShift Routes with TLS termination.
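As a sketch of the last option, you can put an edge-terminated OpenShift Route in front of the Kafka Bridge, so that external clients connect over HTTPS while the Route terminates TLS. The Route name is illustrative, and the Service name and port assume the defaults for your KafkaBridge deployment:

# Create an edge-terminated route in front of the bridge service.
# External clients connect to the route over HTTPS; the route
# forwards plain HTTP to the bridge on port 8080.
oc create route edge kafka-bridge-secure \
  --service=kafka-bridge-name-bridge-service \
  --port=8080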

The Kafka Bridge supports TLS encryption, as well as TLS and SASL authentication, when connecting to the Kafka brokers. Within your OpenShift cluster, you can configure:

  • TLS or SASL-based authentication between the Kafka Bridge and your Kafka cluster
  • A TLS-encrypted connection between the Kafka Bridge and your Kafka cluster

For more information, see Section 3.5.4.1, “Authentication support in Kafka Bridge”.

You can use ACLs in Kafka brokers to restrict the topics that can be consumed and produced using the Kafka Bridge.
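For example, if the Kafka Bridge authenticates to the Kafka cluster as a dedicated user, you can limit that user to specific topics and consumer groups. The following is a sketch using the kafka-acls.sh tool; the principal, topic, group, and bootstrap address are all illustrative:

# Allow the bridge's principal to read from one topic and use one consumer group.
bin/kafka-acls.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 \
  --add --allow-principal User:my-bridge \
  --operation Read --operation Describe \
  --topic my-topic --group my-group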

7.4. Accessing the AMQ Streams Kafka Bridge from outside of OpenShift

After deployment, the AMQ Streams Kafka Bridge can only be accessed by applications running in the same OpenShift cluster. These applications use the kafka-bridge-name-bridge-service Service to access the API.

If you want to make the Kafka Bridge accessible to applications running outside of the OpenShift cluster, you can expose it manually by using one of the following features:

  • Kubernetes Services of types LoadBalancer or NodePort
  • Kubernetes Ingress resources
  • OpenShift Routes

If you decide to create Services, use the following labels in the selector to configure the pods to which the service will route the traffic:

  # ...
  selector:
    strimzi.io/cluster: kafka-bridge-name 1
    strimzi.io/kind: KafkaBridge
  # ...
1
Name of the Kafka Bridge custom resource in your OpenShift cluster.
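For example, the following is a minimal sketch that creates a NodePort Service using these selector labels. The Service name is illustrative, and port 8080 assumes the default HTTP port configured for the Kafka Bridge:

# Create a NodePort Service that routes traffic to the Kafka Bridge pods.
oc apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: kafka-bridge-external
spec:
  type: NodePort
  selector:
    strimzi.io/cluster: kafka-bridge-name
    strimzi.io/kind: KafkaBridge
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
EOF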

7.5. Requests to the AMQ Streams Kafka Bridge

7.5.1. Data formats and headers

Specify data formats and HTTP headers to ensure valid requests are submitted to the Kafka Bridge.

7.5.1.1. Content-Type headers

API request and response bodies are always encoded as JSON.

  • When performing consumer operations, POST requests must provide the following Content-Type header:

    Content-Type: application/vnd.kafka.v2+json
  • When performing producer operations, POST requests must provide Content-Type headers specifying the desired embedded data format, either json or binary, as shown in the following table.

    Embedded data format    Content-Type header

    JSON                    Content-Type: application/vnd.kafka.json.v2+json
    Binary                  Content-Type: application/vnd.kafka.binary.v2+json

You set the embedded data format when creating a consumer using the /consumers/groupid endpoint. For more information, see the next section.

7.5.1.2. Embedded data format

The embedded data format is the format of the Kafka messages that are transmitted, over HTTP, from a producer to a consumer using the Kafka Bridge. Two embedded data formats are supported: JSON and binary.

When creating a consumer using the /consumers/groupid endpoint, the POST request body must specify an embedded data format of either JSON or binary. This is specified in the format field, for example:

{
  "name": "my-consumer",
  "format": "binary", 1
...
}
1
A binary embedded data format.
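Putting this together with the Content-Type header from the previous section, the following sketch creates a consumer with a binary embedded data format. The bridge address, group, and consumer name are illustrative:

# Create a consumer named my-consumer in the consumer group my-group.
# Consumer operations use the Content-Type application/vnd.kafka.v2+json.
curl -X POST http://localhost:8080/consumers/my-group \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{
    "name": "my-consumer",
    "format": "binary"
  }'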

The embedded data format specified when creating a consumer must match the data format of the Kafka messages it will consume.

If you choose to specify a binary embedded data format, subsequent producer requests must provide the binary data in the request body as Base64-encoded strings. For example, when sending messages using the /topics/topicname endpoint, records.value must be encoded in Base64:

{
  "records": [
    {
      "key": "my-key",
      "value": "ZWR3YXJkdGhldGhyZWVsZWdnZWRjYXQ="
    }
  ]
}

Producer requests must also provide a Content-Type header that corresponds to the embedded data format, for example, Content-Type: application/vnd.kafka.binary.v2+json.
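For example, the following is a sketch of producing a Base64-encoded value from the command line; the bridge address and topic name are illustrative:

# Base64-encode the payload, then send it with the binary content type.
VALUE=$(echo -n 'my-binary-payload' | base64)

curl -X POST http://localhost:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.binary.v2+json' \
  -d '{ "records": [ { "value": "'"$VALUE"'" } ] }'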

7.5.1.3. Accept headers

After creating a consumer, all subsequent GET requests must provide an Accept header in the following format:

Accept: application/vnd.kafka.embedded-data-format.v2+json

The embedded-data-format is either json or binary.

For example, when retrieving records for a subscribed consumer using an embedded data format of JSON, include this Accept header:

Accept: application/vnd.kafka.json.v2+json
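For example, assuming a consumer named my-consumer in the group my-group was created with the json embedded data format (all names illustrative), you subscribe it to a topic and then poll for records with the matching Accept header:

# Subscribe the consumer to the topic my-topic.
curl -X POST http://localhost:8080/consumers/my-group/instances/my-consumer/subscription \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{ "topics": ["my-topic"] }'

# Retrieve records; the Accept header must match the consumer's
# embedded data format.
curl -X GET http://localhost:8080/consumers/my-group/instances/my-consumer/records \
  -H 'Accept: application/vnd.kafka.json.v2+json'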

7.6. AMQ Streams Kafka Bridge API resources

For the full list of REST API endpoints and descriptions, including example requests and responses, see Kafka Bridge API reference.