Chapter 3. Deployment configuration

This chapter describes how to configure different aspects of the supported deployments:

  • Kafka clusters
  • Kafka Connect clusters
  • Kafka Connect clusters with Source2Image support
  • Kafka Mirror Maker

3.1. Kafka cluster configuration

The full schema of the Kafka resource is described in Section B.1, “Kafka schema reference”. All labels that are applied to the desired Kafka resource will also be applied to the OpenShift resources making up the Kafka cluster. This provides a convenient mechanism for those resources to be labeled in whatever way the user requires.

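An example of custom labels on a Kafka resource (an illustrative sketch; the app label shown here is arbitrary and is propagated to the OpenShift resources created for the cluster)

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
  labels:
    app: my-kafka
spec:
  kafka:
    # ...
  zookeeper:
    # ...
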
3.1.1. Kafka and Zookeeper storage

Kafka brokers and Zookeeper are stateful applications. They need to store data on disks. AMQ Streams allows you to configure the type of storage you want to use for Kafka and Zookeeper. Storage configuration is mandatory and has to be specified in every Kafka resource.

Storage can be configured using the storage property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper

AMQ Streams supports two types of storage:

  • Ephemeral
  • Persistent

The type of storage is specified in the type field.

Important

Once the Kafka cluster is deployed, the storage cannot be changed.

3.1.1.1. Ephemeral storage

Ephemeral storage uses emptyDir volumes to store data. To use ephemeral storage, the type field should be set to ephemeral.

Important

emptyDir volumes are not persistent and the data stored in them will be lost when the Pod is restarted. After the new pod starts, it has to recover all data from the other nodes of the cluster. Ephemeral storage is not suitable for use with single-node Zookeeper clusters or for Kafka topics with a replication factor of 1, because it will lead to data loss.

An example of Ephemeral storage

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: ephemeral
    # ...
  zookeeper:
    # ...
    storage:
      type: ephemeral
    # ...

3.1.1.2. Persistent storage

Persistent storage uses Persistent Volume Claims to provision persistent volumes for storing data. Persistent Volume Claims can be used to provision volumes of many different types, depending on the Storage Class which will provision the volume. The types of volumes which can be used with Persistent Volume Claims include many types of SAN storage as well as Local persistent volumes.

To use persistent storage, the type has to be set to persistent-claim. Persistent storage supports additional configuration options:

size (required)
Defines the size of the persistent volume claim, for example, "1000Gi".
class (optional)
The OpenShift Storage Class to use for dynamic volume provisioning.
selector (optional)
Allows selecting a specific persistent volume to use. It contains a matchLabels field which contains key:value pairs representing labels for selecting such a volume.
deleteClaim (optional)
Boolean value which specifies if the Persistent Volume Claim has to be deleted when the cluster is undeployed. Default is false.
Warning

Resizing persistent storage for existing AMQ Streams clusters is not currently supported. You must decide the necessary storage size before deploying the cluster.

Example fragment of persistent storage configuration with 1000Gi size

# ...
storage:
  type: persistent-claim
  size: 1000Gi
# ...

The following example demonstrates the use of a storage class.

Example fragment of persistent storage configuration with specific Storage Class

# ...
storage:
  type: persistent-claim
  size: 1Gi
  class: my-storage-class
# ...

Finally, a selector can be used to select a specific labeled persistent volume to provide needed features such as an SSD.

Example fragment of persistent storage configuration with selector

# ...
storage:
  type: persistent-claim
  size: 1Gi
  selector:
    matchLabels:
      "hdd-type": "ssd"
  deleteClaim: true
# ...

Persistent Volume Claim naming

When persistent storage is used, Persistent Volume Claims with the following names are created:

data-cluster-name-kafka-idx
Persistent Volume Claim for the volume used for storing data for the Kafka broker pod idx.
data-cluster-name-zookeeper-idx
Persistent Volume Claim for the volume used for storing data for the Zookeeper node pod idx.

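For example, a cluster named my-cluster with three Kafka brokers and three Zookeeper nodes would use the following Persistent Volume Claims:

data-my-cluster-kafka-0
data-my-cluster-kafka-1
data-my-cluster-kafka-2
data-my-cluster-zookeeper-0
data-my-cluster-zookeeper-1
data-my-cluster-zookeeper-2
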
Additional resources

3.1.2. Replicas

A Kafka cluster can run with many brokers. You can configure the number of brokers used for the Kafka cluster in Kafka.spec.kafka.replicas. The best number of brokers for your cluster has to be determined based on your specific use case.

3.1.2.1. Configuring the number of broker nodes

This procedure describes how to configure the number of Kafka broker nodes in a new cluster. It only applies to new clusters with no partitions. If your cluster already has topics defined, see Section 3.1.20, “Scaling clusters”.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • A Kafka cluster with no topics defined yet

Procedure

  1. Edit the replicas property in the Kafka resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        replicas: 3
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

Additional resources

If your cluster already has topics defined see Section 3.1.20, “Scaling clusters”.

3.1.3. Kafka broker configuration

AMQ Streams allows you to customize the configuration of Apache Kafka brokers. You can specify and configure most of the options listed in the Apache Kafka documentation.

The only options which cannot be configured are those related to the following areas:

  • Security (Encryption, Authentication, and Authorization)
  • Listener configuration
  • Broker ID configuration
  • Configuration of log data directories
  • Inter-broker communication
  • Zookeeper connectivity

These options are automatically configured by AMQ Streams.

3.1.3.1. Kafka broker configuration

Kafka brokers can be configured using the config property in Kafka.spec.kafka.

This property should contain the Kafka broker configuration options as keys. The values could be in one of the following JSON types:

  • String
  • Number
  • Boolean

Users can specify and configure the options listed in Apache Kafka documentation with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • listeners
  • advertised.
  • broker.
  • listener.
  • host.name
  • port
  • inter.broker.listener.name
  • sasl.
  • ssl.
  • security.
  • password.
  • principal.builder.class
  • log.dir
  • zookeeper.connect
  • zookeeper.set.acl
  • authorizer.
  • super.user

When one of the forbidden options is present in the config property, it will be ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka.

Important

The Cluster Operator does not validate keys or values in the provided config object. When invalid configuration is provided, the Kafka cluster might not start or might become unstable. In such cases, the configuration in the Kafka.spec.kafka.config object should be fixed and the cluster operator will roll out the new configuration to all Kafka brokers.

An example showing Kafka broker configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    config:
      num.partitions: 1
      num.recovery.threads.per.data.dir: 1
      default.replication.factor: 3
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 1
      log.retention.hours: 168
      log.segment.bytes: 1073741824
      log.retention.check.interval.ms: 300000
      num.network.threads: 3
      num.io.threads: 8
      socket.send.buffer.bytes: 102400
      socket.receive.buffer.bytes: 102400
      socket.request.max.bytes: 104857600
      group.initial.rebalance.delay.ms: 0
    # ...

3.1.3.2. Configuring Kafka brokers

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the config property in the Kafka resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        config:
          default.replication.factor: 3
          offsets.topic.replication.factor: 3
          transaction.state.log.replication.factor: 3
          transaction.state.log.min.isr: 1
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.4. Kafka broker listeners

AMQ Streams allows users to configure the listeners which will be enabled in Kafka brokers. Three types of listeners are supported:

  • Plain listener on port 9092 (without encryption)
  • TLS listener on port 9093 (with encryption)
  • External listener on port 9094 (for access from outside of OpenShift)

3.1.4.1. Mutual TLS authentication for clients

3.1.4.1.1. Mutual TLS authentication

Mutual authentication or two-way authentication is when both the server and the client present certificates. AMQ Streams can configure Kafka to use TLS (Transport Layer Security) to provide encrypted communication between Kafka brokers and clients either with or without mutual authentication. When you configure mutual authentication, the broker authenticates the client and the client authenticates the broker. Mutual TLS authentication is always used for the communication between Kafka brokers and Zookeeper pods.

Note

TLS authentication is more commonly one-way, where only one party authenticates to the other. In many common uses of TLS (such as the HTTPS protocol used between a web browser and a web server) the authentication is not mutual: only the browser gets proof of the identity of the web server.

3.1.4.1.2. When to use mutual TLS authentication for clients

Mutual TLS authentication is recommended for authenticating Kafka clients when:

  • The client supports authentication using mutual TLS authentication
  • It is necessary to use the TLS certificates rather than passwords
  • You can reconfigure and restart client applications periodically so that they do not use expired certificates.

3.1.4.2. SCRAM-SHA authentication

SCRAM (Salted Challenge Response Authentication Mechanism) is an authentication protocol that can establish mutual authentication using passwords. AMQ Streams can configure Kafka to use SASL SCRAM-SHA-512 to provide authentication on both unencrypted and TLS-encrypted client connections. TLS authentication is always used internally between Kafka brokers and Zookeeper nodes. When used with a TLS client connection, the TLS protocol provides encryption, but is not used for authentication.

The following properties of SCRAM make it safe to use SCRAM-SHA even on unencrypted connections:

  • The passwords are not sent in the clear over the communication channel. Instead the client and the server are each challenged by the other to offer proof that they know the password of the authenticating user.
  • The server and client each generate a new challenge for each authentication exchange. This means that the exchange is resilient against replay attacks.

3.1.4.2.1. Supported SCRAM credentials

AMQ Streams supports SCRAM-SHA-512 only. When a KafkaUser.spec.authentication.type is configured with scram-sha-512, the User Operator will generate a random 12-character password consisting of upper and lowercase ASCII letters and numbers.

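An example of a KafkaUser with SCRAM-SHA-512 authentication (a minimal sketch; the user name and the cluster referenced by the strimzi.io/cluster label are illustrative)

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
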
3.1.4.2.2. When to use SCRAM-SHA authentication for clients

SCRAM-SHA is recommended for authenticating Kafka clients when:

  • The client supports authentication using SCRAM-SHA-512
  • It is necessary to use passwords rather than TLS certificates
  • You want to have authentication for unencrypted communication

3.1.4.3. Kafka listeners

You can configure Kafka broker listeners using the listeners property in the Kafka.spec.kafka resource. The listeners property contains three sub-properties:

  • plain
  • tls
  • external

Each listener is enabled only when its sub-property is defined. When a sub-property is not defined, that listener is disabled.

An example of listeners property with the plain and tls listeners enabled

# ...
listeners:
  plain: {}
  tls: {}
# ...

An example of listeners property with only the plain listener enabled

# ...
listeners:
  plain: {}
# ...

3.1.4.3.1. External listener

The external listener is used to connect to a Kafka cluster from outside of an OpenShift environment. AMQ Streams supports three types of external listeners:

  • route
  • loadbalancer
  • nodeport

Exposing Kafka using OpenShift Routes

An external listener of type route exposes Kafka by using OpenShift Routes and the HAProxy router. A dedicated Route is created for every Kafka broker pod. An additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443.

When exposing Kafka using OpenShift Routes, TLS encryption is always used.

For more information on using Routes to access Kafka, see Section 3.1.4.5, “Accessing Kafka using OpenShift routes”.

Exposing Kafka using loadbalancers

External listeners of type loadbalancer expose Kafka by using Loadbalancer type Services. A new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to connections on port 9094.

By default, TLS encryption is enabled. To disable it, set the tls field to false.

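An example fragment of an external loadbalancer listener with TLS encryption disabled (an illustrative sketch)

# ...
listeners:
  external:
    type: loadbalancer
    tls: false
# ...
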
For more information on using loadbalancers to access Kafka, see Section 3.1.4.6, “Accessing Kafka using loadbalancers”.

Exposing Kafka using node ports

External listeners of type nodeport expose Kafka by using NodePort type Services. When exposing Kafka in this way, Kafka clients connect directly to the nodes of OpenShift. You must enable access to the ports on the OpenShift nodes for each client (for example, in firewalls or security groups). Each Kafka broker pod is then accessible on a separate port. An additional NodePort type Service is created to serve as a Kafka bootstrap address.

When configuring the advertised addresses for the Kafka broker pods, AMQ Streams uses the address of the node on which the given pod is running. When selecting the node address, the different address types are used with the following priority:

  1. ExternalDNS
  2. ExternalIP
  3. Hostname
  4. InternalDNS
  5. InternalIP

By default, TLS encryption is enabled. To disable it, set the tls field to false.

Note

TLS hostname verification is not currently supported when exposing Kafka clusters using node ports.

For more information on using node ports to access Kafka, see Section 3.1.4.7, “Accessing Kafka using node ports”.

3.1.4.3.2. Listener authentication

The listener sub-properties can also contain additional configuration. All listener types support the authentication property. This is used to specify an authentication mechanism specific to that listener:

  • mutual TLS authentication (only on the listeners with TLS encryption)
  • SCRAM-SHA authentication

If no authentication property is specified then the listener does not authenticate clients which connect through that listener.

An example where the plain listener is configured for SCRAM-SHA authentication and the tls listener with mutual TLS authentication

# ...
listeners:
  plain:
    authentication:
      type: scram-sha-512
  tls:
    authentication:
      type: tls
  external:
    type: loadbalancer
    tls: true
    authentication:
      type: tls
# ...

Authentication must be configured when using the User Operator to manage KafkaUsers.

3.1.4.4. Configuring Kafka listeners

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the listeners property in the Kafka.spec.kafka resource.

    An example configuration of the plain (unencrypted) listener without authentication:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          plain: {}
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

Additional resources

3.1.4.5. Accessing Kafka using OpenShift routes

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Deploy a Kafka cluster with an external listener enabled and configured to the type route.

    An example configuration with an external listener configured to use Routes:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          external:
            type: route
            # ...
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    oc apply -f your-file
  3. Find the address of the bootstrap Route.

    oc get routes cluster-name-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'

    Use the address together with port 443 in your Kafka client as the bootstrap address.

  4. Extract the public certificate of the broker certification authority

    oc extract secret/cluster-name-cluster-ca-cert --keys=ca.crt --to=- > ca.crt

    Use the extracted certificate in your Kafka client to configure TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.

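    For example, with a Java-based client you might import the extracted CA certificate into a truststore and point the client at it. This is an illustrative sketch only; the truststore file name and password are placeholders:

    keytool -importcert -trustcacerts -noprompt -alias cluster-ca -file ca.crt -keystore client-truststore.jks -storepass changeit

    The client configuration would then reference the truststore and the bootstrap Route address:

    security.protocol=SSL
    bootstrap.servers=<route-address>:443
    ssl.truststore.location=client-truststore.jks
    ssl.truststore.password=changeit
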
Additional resources

3.1.4.6. Accessing Kafka using loadbalancers

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Deploy a Kafka cluster with an external listener enabled and configured to the type loadbalancer.

    An example configuration with an external listener configured to use loadbalancers:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          external:
            type: loadbalancer
            tls: true
            # ...
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file
  3. Find the hostname of the bootstrap loadbalancer.

    On OpenShift this can be done using oc get:

    oc get service cluster-name-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}{"\n"}'

    If no hostname was found (nothing was returned by the command), use the loadbalancer IP address.

    On OpenShift this can be done using oc get:

    oc get service cluster-name-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'

    Use the hostname or IP address together with port 9094 in your Kafka client as the bootstrap address.

  4. Unless TLS encryption was disabled, extract the public certificate of the broker certification authority.

    On OpenShift this can be done using oc extract:

    oc extract secret/cluster-name-cluster-ca-cert --keys=ca.crt --to=- > ca.crt

    Use the extracted certificate in your Kafka client to configure TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.

Additional resources

3.1.4.7. Accessing Kafka using node ports

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Deploy a Kafka cluster with an external listener enabled and configured to the type nodeport.

    An example configuration with an external listener configured to use node ports:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          external:
            type: nodeport
            tls: true
            # ...
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file
  3. Find the port number of the bootstrap service.

    On OpenShift this can be done using oc get:

    oc get service cluster-name-kafka-external-bootstrap -o=jsonpath='{.spec.ports[0].nodePort}{"\n"}'

    The port should be used in the Kafka bootstrap address.

  4. Find the address of the OpenShift node.

    On OpenShift this can be done using oc get:

    oc get node node-name -o=jsonpath='{range .status.addresses[*]}{.type}{"\t"}{.address}{"\n"}'

    If several different addresses are returned, select the address type you want based on the following order:

    1. ExternalDNS
    2. ExternalIP
    3. Hostname
    4. InternalDNS
    5. InternalIP

      Use the address with the port found in the previous step in the Kafka bootstrap address.

  5. Unless TLS encryption was disabled, extract the public certificate of the broker certification authority.

    On OpenShift this can be done using oc extract:

    oc extract secret/cluster-name-cluster-ca-cert --keys=ca.crt --to=- > ca.crt

    Use the extracted certificate in your Kafka client to configure TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.

Additional resources

3.1.5. Authentication and Authorization

AMQ Streams supports authentication and authorization. Authentication can be configured independently for each listener. Authorization is always configured for the whole Kafka cluster.

3.1.5.1. Authentication

Authentication is configured as part of the listener configuration in the authentication property. When the authentication property is missing, no authentication is enabled on the given listener. The authentication mechanism which will be used is defined by the type field.

The supported authentication mechanisms are:

  • TLS client authentication
  • SASL SCRAM-SHA-512

3.1.5.1.1. TLS client authentication

TLS client authentication can be enabled by specifying the type as tls. TLS client authentication is supported only on the tls listener.

An example of authentication with type tls

# ...
authentication:
  type: tls
# ...

3.1.5.2. Configuring authentication in Kafka brokers

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the listeners property in the Kafka.spec.kafka resource. Add the authentication field to the listeners where you want to enable authentication. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          tls:
            authentication:
              type: tls
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

Additional resources

3.1.5.3. Authorization

Authorization can be configured using the authorization property in the Kafka.spec.kafka resource. When the authorization property is missing, no authorization will be enabled. When authorization is enabled it will be applied for all enabled listeners. The authorization method is defined by the type field.

Currently, the only supported authorization method is simple authorization.

3.1.5.3.1. Simple authorization

Simple authorization uses the SimpleAclAuthorizer plugin. SimpleAclAuthorizer is the default authorization plugin which is part of Apache Kafka. To enable simple authorization, the type field should be set to simple.

An example of Simple authorization

# ...
authorization:
  type: simple
# ...

3.1.5.4. Configuring authorization in Kafka brokers

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Add or edit the authorization property in the Kafka.spec.kafka resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        authorization:
          type: simple
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

Additional resources

3.1.6. Zookeeper replicas

Zookeeper clusters or ensembles usually run with an odd number of nodes and always require a majority of the nodes to be available in order to maintain a quorum. Maintaining a quorum is important because when a Zookeeper cluster loses its quorum, it stops responding to clients. As a result, a Zookeeper cluster without a quorum will cause the Kafka brokers to stop working as well. This is why having a stable and highly available Zookeeper cluster is very important for AMQ Streams.

A Zookeeper cluster is usually deployed with three, five, or seven nodes.

Three nodes
A Zookeeper cluster consisting of three nodes requires at least two nodes to be up and running in order to maintain the quorum. It can tolerate only one node being unavailable.
Five nodes
A Zookeeper cluster consisting of five nodes requires at least three nodes to be up and running in order to maintain the quorum. It can tolerate two nodes being unavailable.
Seven nodes
A Zookeeper cluster consisting of seven nodes requires at least four nodes to be up and running in order to maintain the quorum. It can tolerate three nodes being unavailable.
Note

For development purposes, it is also possible to run Zookeeper with a single node.

Having more nodes does not necessarily mean better performance, as the cost of maintaining the quorum rises with the number of nodes in the cluster. Depending on your availability requirements, you can decide on the number of nodes to use.

3.1.6.1. Number of Zookeeper nodes

The number of Zookeeper nodes can be configured using the replicas property in Kafka.spec.zookeeper.

An example showing replicas configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
    replicas: 3
    # ...

3.1.6.2. Changing number of replicas

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the replicas property in the Kafka resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
        replicas: 3
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.7. Zookeeper configuration

AMQ Streams allows you to customize the configuration of Apache Zookeeper nodes. You can specify and configure most of the options listed in the Zookeeper documentation.

The only options which cannot be configured are those related to the following areas:

  • Security (Encryption, Authentication, and Authorization)
  • Listener configuration
  • Configuration of data directories
  • Zookeeper cluster composition

These options are automatically configured by AMQ Streams.

3.1.7.1. Zookeeper configuration

Zookeeper nodes can be configured using the config property in Kafka.spec.zookeeper. This property should contain the Zookeeper configuration options as keys. The values could be in one of the following JSON types:

  • String
  • Number
  • Boolean

Users can specify and configure the options listed in Zookeeper documentation with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • server.
  • dataDir
  • dataLogDir
  • clientPort
  • authProvider
  • quorum.auth
  • requireClientAuthScheme

When one of the forbidden options is present in the config property, it will be ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Zookeeper.

Important

The Cluster Operator does not validate keys or values in the provided config object. When invalid configuration is provided, the Zookeeper cluster might not start or might become unstable. In such cases, the configuration in the Kafka.spec.zookeeper.config object should be fixed and the cluster operator will roll out the new configuration to all Zookeeper nodes.

Selected options have default values:

  • timeTick with default value 2000
  • initLimit with default value 5
  • syncLimit with default value 2
  • autopurge.purgeInterval with default value 1

These options will be automatically configured when they are not present in the Kafka.spec.zookeeper.config property.

An example showing Zookeeper configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
spec:
  kafka:
    # ...
  zookeeper:
    # ...
    config:
      autopurge.snapRetainCount: 3
      autopurge.purgeInterval: 1
    # ...

3.1.7.2. Configuring Zookeeper

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the config property in the Kafka resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
        config:
          autopurge.snapRetainCount: 3
          autopurge.purgeInterval: 1
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.8. Entity Operator

The Entity Operator is responsible for managing different entities in a running Kafka cluster. The currently supported entities are:

Kafka topics
managed by the Topic Operator.
Kafka users
managed by the User Operator

Both the Topic Operator and the User Operator can be deployed on their own, but the easiest way to deploy them is together with the Kafka cluster as part of the Entity Operator. The Entity Operator can include either one or both of them depending on the configuration. They will be automatically configured to manage the topics and users of the Kafka cluster with which they are deployed.

For more information about Topic Operator, see Section 4.2, “Topic Operator”. For more information about how to use Topic Operator to create or delete topics, see Chapter 5, Using the Topic Operator.

3.1.8.1. Configuration

The Entity Operator can be configured using the entityOperator property in Kafka.spec

The entityOperator property supports several sub-properties:

  • tlsSidecar
  • affinity
  • tolerations
  • topicOperator
  • userOperator

The tlsSidecar property can be used to configure the TLS sidecar container which is used to communicate with Zookeeper. For more details about configuring the TLS sidecar, see Section 3.1.16, “TLS sidecar”.

The affinity and tolerations properties can be used to configure how OpenShift schedules the Entity Operator pod. For more details about pod scheduling, see Section 3.1.17, “Configuring pod scheduling”.

The topicOperator property contains the configuration of the Topic Operator. When this option is missing, the Entity Operator will be deployed without the Topic Operator.

The userOperator property contains the configuration of the User Operator. When this option is missing, the Entity Operator will be deployed without the User Operator.

Example of basic configuration enabling both operators

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    topicOperator: {}
    userOperator: {}

When both topicOperator and userOperator properties are missing, the Entity Operator will not be deployed.

3.1.8.1.1. Topic Operator

The Topic Operator deployment can be configured using additional options inside the topicOperator object. The following options are supported:

watchedNamespace
The OpenShift namespace in which the topic operator watches for KafkaTopics. Default is the namespace where the Kafka cluster is deployed.
reconciliationIntervalSeconds
The interval between periodic reconciliations in seconds. Default is 90.
zookeeperSessionTimeoutSeconds
The Zookeeper session timeout in seconds. Default is 20 seconds.
topicMetadataMaxAttempts
The number of attempts for getting topics metadata from Kafka. The time between each attempt is defined as an exponential back-off. You might want to increase this value when topic creation could take more time due to its many partitions or replicas. Default is 6.
image
The image property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 3.1.15, “Container images”.
resources
The resources property configures the amount of resources allocated to the Topic Operator. For more details about resource request and limit configuration, see Section 3.1.9, “CPU and memory resources”.
logging
The logging property configures the logging of the Topic Operator. For more details about logging configuration, see Section 3.1.10, “Logging”.

Example of Topic Operator configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
    # ...

3.1.8.1.2. User Operator

The User Operator deployment can be configured using additional options inside the userOperator object. The following options are supported:

watchedNamespace
The OpenShift namespace in which the User Operator watches for KafkaUsers. Default is the namespace where the Kafka cluster is deployed.
reconciliationIntervalSeconds
The interval between periodic reconciliations in seconds. Default is 120.
zookeeperSessionTimeoutSeconds
The Zookeeper session timeout in seconds. Default is 6 seconds.
image
The image property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 3.1.15, “Container images”.
resources
The resources property configures the amount of resources allocated to the User Operator. For more details about resource request and limit configuration, see Section 3.1.9, “CPU and memory resources”.
logging
The logging property configures the logging of the User Operator. For more details about logging configuration, see Section 3.1.10, “Logging”.

Example of User Operator configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    userOperator:
      watchedNamespace: my-user-namespace
      reconciliationIntervalSeconds: 60
    # ...

3.1.8.2. Configuring Entity Operator

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the entityOperator property in the Kafka resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
      entityOperator:
        topicOperator:
          watchedNamespace: my-topic-namespace
          reconciliationIntervalSeconds: 60
        userOperator:
          watchedNamespace: my-user-namespace
          reconciliationIntervalSeconds: 60
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.9. CPU and memory resources

For every deployed container, AMQ Streams allows you to specify the resources which should be reserved for it and the maximum resources that can be consumed by it. AMQ Streams supports two types of resources:

  • Memory
  • CPU

AMQ Streams uses the OpenShift syntax for specifying CPU and memory resources.

3.1.9.1. Resource limits and requests

Resource limits and requests can be configured using the resources property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

3.1.9.1.1. Resource requests

Requests specify the resources that will be reserved for a given container. Reserving the resources will ensure that they are always available.

Important

If the resource request is for more than the available free resources in the OpenShift cluster, the pod will not be scheduled.

Resource requests can be specified in the requests property. The resource requests currently supported by AMQ Streams are memory and CPU. Memory is specified under the property memory. CPU is specified under the property cpu.

An example showing resource request configuration

# ...
resources:
  requests:
    cpu: 12
    memory: 64Gi
# ...

It is also possible to specify a resource request just for one of the resources:

An example showing resource request configuration with memory request only

# ...
resources:
  requests:
    memory: 64Gi
# ...

Or:

An example showing resource request configuration with CPU request only

# ...
resources:
  requests:
    cpu: 12
# ...

3.1.9.1.2. Resource limits

Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. The container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests.

Resource limits can be specified in the limits property. The resource limits currently supported by AMQ Streams are memory and CPU. Memory is specified under the property memory. CPU is specified under the property cpu.

An example showing resource limits configuration

# ...
resources:
  limits:
    cpu: 12
    memory: 64Gi
# ...

It is also possible to specify the resource limit just for one of the resources:

An example showing resource limits configuration with memory limit only

# ...
resources:
  limits:
    memory: 64Gi
# ...

Or:

An example showing resource limits configuration with CPU limit only

# ...
resources:
  limits:
    cpu: 12
# ...

3.1.9.1.3. Supported CPU formats

CPU requests and limits are supported in the following formats:

  • Number of CPU cores as an integer (5 CPU cores) or decimal (2.5 CPU cores).
  • Number of millicpus / millicores (100m), where 1000 millicores is the same as 1 CPU core.

An example of using different CPU units

# ...
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 2.5
# ...

Note

The amount of computing power of 1 CPU core might differ depending on the platform where OpenShift is deployed.

For more details about the CPU specification, see the Meaning of CPU website.

3.1.9.1.4. Supported memory formats

Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.

  • To specify memory in megabytes, use the M suffix. For example 1000M.
  • To specify memory in gigabytes, use the G suffix. For example 1G.
  • To specify memory in mebibytes, use the Mi suffix. For example 1000Mi.
  • To specify memory in gibibytes, use the Gi suffix. For example 1Gi.

An example of using different memory units

# ...
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# ...

For more details about the memory specification and additional supported units, see the Meaning of memory website.

3.1.9.1.5. Additional resources

3.1.9.2. Configuring resource requests and limits

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the resources property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        resources:
          requests:
            cpu: "8"
            memory: 64Gi
          limits:
            cpu: "12"
            memory: 128Gi
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

Additional resources

3.1.10. Logging

Logging enables you to diagnose errors and performance issues in AMQ Streams. Various logger implementations are used: Kafka and Zookeeper use the log4j logger, while the Topic Operator, User Operator, and other components use the log4j2 logger.

This section provides information about different loggers and describes how to configure log levels.

You can set the log levels by specifying the loggers and their levels directly (inline) or by using a custom (external) config map.

3.1.10.1. Using inline logging setting

Procedure

  1. Edit the YAML file to specify the loggers and their level for the required components. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        logging:
          type: inline
          loggers:
            kafka.root.logger.level: "INFO"
        # ...

    In the above example, the log level is set to INFO. You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. For more information about the log levels, see the log4j manual.

  2. Create or update the Kafka resource in OpenShift.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.10.2. Using external ConfigMap for logging setting

Procedure

  1. Edit the YAML file to specify the name of the ConfigMap which should be used for the required components. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        logging:
          type: external
          name: customConfigMap
        # ...

    Remember to place your custom logging configuration under the log4j.properties key (or the log4j2.properties key for components that use log4j2) in the ConfigMap.

  2. Create or update the Kafka resource in OpenShift.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

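An example of an external logging ConfigMap (an illustrative sketch; the ConfigMap name matches the customConfigMap referenced above, and a complete log4j.properties would normally also contain the appender definitions used by the default image configuration)

apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  log4j.properties: |
    # logger levels only; appender configuration omitted
    kafka.root.logger.level=INFO
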
3.1.10.3. Loggers

AMQ Streams consists of several components. Each component has its own loggers and is configurable. This section provides information about loggers of various components.

Components and their loggers are listed below.

  • Kafka

    • kafka.root.logger.level
    • log4j.logger.org.I0Itec.zkclient.ZkClient
    • log4j.logger.org.apache.zookeeper
    • log4j.logger.kafka
    • log4j.logger.org.apache.kafka
    • log4j.logger.kafka.request.logger
    • log4j.logger.kafka.network.Processor
    • log4j.logger.kafka.server.KafkaApis
    • log4j.logger.kafka.network.RequestChannel$
    • log4j.logger.kafka.controller
    • log4j.logger.kafka.log.LogCleaner
    • log4j.logger.state.change.logger
    • log4j.logger.kafka.authorizer.logger
  • Zookeeper

    • zookeeper.root.logger
  • Kafka Connect and Kafka Connect with Source2Image support

    • connect.root.logger.level
    • log4j.logger.org.apache.zookeeper
    • log4j.logger.org.I0Itec.zkclient
    • log4j.logger.org.reflections
  • Kafka Mirror Maker

    • mirrormaker.root.logger
  • Topic Operator

    • rootLogger.level
  • User Operator

    • rootLogger.level

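An example of inline logging configuration using two of the Kafka loggers listed above (the levels shown are illustrative)

# ...
logging:
  type: inline
  loggers:
    kafka.root.logger.level: "INFO"
    log4j.logger.kafka.request.logger: "DEBUG"
# ...
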
3.1.11. Kafka rack awareness

The rack awareness feature in AMQ Streams helps to spread the Kafka broker pods and Kafka topic replicas across different racks. Enabling rack awareness helps to improve availability of Kafka brokers and the topics they are hosting.

Note

"Rack" might represent an availability zone, data center, or an actual rack in your data center.

3.1.11.1. Configuring rack awareness in Kafka brokers

Kafka rack awareness can be configured in the rack property of Kafka.spec.kafka. The rack object has one mandatory field named topologyKey. This key needs to match one of the labels assigned to the OpenShift cluster nodes. The label is used by OpenShift when scheduling the Kafka broker pods to nodes. If the OpenShift cluster is running on a cloud provider platform, that label should represent the availability zone where the node is running. Usually, the nodes are labeled with failure-domain.beta.kubernetes.io/zone, which can be easily used as the topologyKey value. This has the effect of spreading the broker pods across zones, and also setting the brokers' broker.rack configuration parameter inside the Kafka brokers.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Consult your OpenShift administrator regarding the node label that represents the zone or rack into which the node is deployed.
  2. Edit the rack property in the Kafka resource using the label as the topology key.

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        rack:
          topologyKey: failure-domain.beta.kubernetes.io/zone
        # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

Additional Resources

3.1.12. Healthchecks

Healthchecks are periodic tests which verify the health of an application. When a Healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it. OpenShift supports two types of Healthcheck probes:

  • Liveness probes
  • Readiness probes

For more details about the probes, see Configure Liveness and Readiness Probes. Both types of probes are used in AMQ Streams components.

Users can configure selected options for liveness and readiness probes

3.1.12.1. Healthcheck configurations

Liveness and readiness probes can be configured using the livenessProbe and readinessProbe properties in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

Both livenessProbe and readinessProbe support two additional options:

  • initialDelaySeconds
  • timeoutSeconds

The initialDelaySeconds property defines the initial delay before the probe is tried for the first time. Default is 15 seconds.

The timeoutSeconds property defines timeout of the probe. Default is 5 seconds.

An example of liveness and readiness probe configuration

# ...
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
livenessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...

3.1.12.2. Configuring healthchecks

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the livenessProbe or readinessProbe property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        readinessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.13. Prometheus metrics

AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and Zookeeper to Prometheus metrics. When metrics are enabled, they are exposed on port 9404.

3.1.13.1. Metrics configuration

Prometheus metrics can be enabled by configuring the metrics property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

When the metrics property is not defined in the resource, the Prometheus metrics will be disabled. To enable Prometheus metrics export without any further configuration, you can set it to an empty object ({}).

Example of enabling metrics without any further configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics: {}
    # ...
  zookeeper:
    # ...

The metrics property might contain additional configuration for the Prometheus JMX exporter.

Example of enabling metrics with additional Prometheus JMX Exporter configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics:
      lowercaseOutputName: true
      rules:
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count"
          name: "kafka_server_$1_$2_total"
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count"
          name: "kafka_server_$1_$2_total"
          labels:
            topic: "$3"
    # ...
  zookeeper:
    # ...

3.1.13.2. Configuring Prometheus metrics

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the metrics property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
        metrics:
          lowercaseOutputName: true
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.14. JVM Options

Apache Kafka and Apache Zookeeper run inside a Java Virtual Machine (JVM). The JVM has many configuration options to optimize the performance for different platforms and architectures. AMQ Streams allows you to configure some of these options.

3.1.14.1. JVM configuration

JVM options can be configured using the jvmOptions property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

Only a selected subset of available JVM options can be configured. The following options are supported:

-Xms and -Xmx

-Xms configures the minimum initial allocation heap size when the JVM starts. -Xmx configures the maximum heap size.

Note

The units accepted by JVM settings such as -Xmx and -Xms are those accepted by the JDK java binary in the corresponding image. Accordingly, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is in contrast to the units used for memory requests and limits, which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes

The default values used for -Xms and -Xmx depend on whether there is a memory limit configured for the container:

  • If there is a memory limit then the JVM’s minimum and maximum memory will be set to a value corresponding to the limit.
  • If there is no memory limit then the JVM’s minimum memory will be set to 128M and the JVM’s maximum memory will not be defined. This allows for the JVM’s memory to grow as-needed, which is ideal for single node environments in test and development.
Important

Setting -Xmx explicitly requires some care:

  • The JVM’s overall memory usage will be approximately 4 × the maximum heap, as configured by -Xmx.
  • If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it).
  • If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if -Xms is set to -Xmx, or some later time if not).

When setting -Xmx explicitly, it is recommended to:

  • set the memory request and the memory limit to the same value,
  • use a memory request that is at least 4.5 × the -Xmx,
  • consider setting -Xms to the same value as -Xmx.
Important

Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.

Example fragment configuring -Xmx and -Xms

# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...

In the above example, the JVM will use 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage will be approximately 8GiB.

Setting the same value for initial (-Xms) and maximum (-Xmx) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and Zookeeper pods such allocation could cause unwanted latency. For Kafka Connect avoiding over allocation may be the most important concern, especially in distributed mode where the effects of over-allocation will be multiplied by the number of consumers.

-server

-server enables the server JVM. This option can be set to true or false.

Example fragment configuring -server

# ...
jvmOptions:
  "-server": true
# ...

Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

-XX

-XX object can be used for configuring advanced runtime options of a JVM. The -server and -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka.

Example showing the use of the -XX object

jvmOptions:
  "-XX":
    "UseG1GC": true,
    "MaxGCPauseMillis": 20,
    "InitiatingHeapOccupancyPercent": 35,
    "ExplicitGCInvokesConcurrent": true,
    "UseParNewGC": false

The example configuration above will result in the following JVM options:

-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

3.1.14.2. Configuring JVM options

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the jvmOptions property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        jvmOptions:
          "-Xmx": "8g"
          "-Xms": "8g"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.15. Container images

AMQ Streams allows you to configure the container images which will be used for its components. Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.

3.1.15.1. Container image configurations

Container image which should be used for given components can be specified using the image property in:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The image specified in the component-specific custom resource will be used during deployment. If the image field is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used.

  • For Kafka brokers:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka:latest container image.
  • For Kafka broker TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-stunnel:latest container image.
  • For Zookeeper nodes:

    1. Container image specified in the STRIMZI_DEFAULT_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/zookeeper:latest container image.
  • For Zookeeper node TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/zookeeper-stunnel:latest container image.
  • For Topic Operator:

    1. Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/topic-operator:latest container image.
  • For User Operator:

    1. Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/user-operator:latest container image.
  • For Entity Operator TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/entity-operator-stunnel:latest container image.
  • For Kafka Connect:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-connect:latest container image.
  • For Kafka Connect with Source2Image support:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-connect-s2i:latest container image.
Warning

Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with the AMQ Streams images, it might not work properly.

Example of container image configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...

3.1.15.2. Configuring container images

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the image property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        image: my-org/my-image:latest
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.16. TLS sidecar

A sidecar is a container that runs in a pod and serves an auxiliary purpose. Because Zookeeper does not support TLS encryption natively, AMQ Streams uses a TLS sidecar to encrypt and decrypt the communication between AMQ Streams components and Zookeeper.

The TLS sidecar is currently used in:

  • Kafka brokers
  • Zookeeper
  • Entity Operator

3.1.16.1. TLS sidecar configuration

The TLS sidecar can be configured using the tlsSidecar property in:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator

The TLS sidecar supports three additional options:

  • image
  • resources
  • logLevel

The resources property can be used to specify the memory and CPU resources allocated for the TLS sidecar.

The image property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 3.1.15, “Container images”.

The logLevel property is used to specify the logging level. The following logging levels are supported:

  • emerg
  • alert
  • crit
  • err
  • warning
  • notice
  • info
  • debug

The default value is notice.

Example of TLS sidecar configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    tlsSidecar:
      image: my-org/my-image:latest
      resources:
        requests:
          cpu: 200m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 128Mi
      logLevel: debug
    # ...
  zookeeper:
    # ...

3.1.16.2. Configuring TLS sidecar

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the tlsSidecar property in the Kafka resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        tlsSidecar:
          resources:
            requests:
              cpu: 200m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 128Mi
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.17. Configuring pod scheduling

Important

When two applications are scheduled to the same OpenShift node, both applications might compete for the same resources, such as disk I/O, and impact each other's performance. The best ways to avoid such problems are to schedule Kafka pods so that they do not share nodes with other critical workloads, to use the right nodes, or to dedicate a set of nodes only to Kafka.

3.1.17.1. Scheduling pods based on other applications

3.1.17.1.1. Avoiding critical applications sharing a node

Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.

3.1.17.1.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.1.17.1.3. Configuring pod anti-affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: application
                      operator: In
                      values:
                        - postgresql
                        - mongodb
                topologyKey: "kubernetes.io/hostname"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.17.2. Scheduling pods to specific nodes

3.1.17.2.1. Node scheduling

An OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using the right nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to schedule AMQ Streams components on the right nodes.

OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either built-in node labels, such as beta.kubernetes.io/instance-type, or custom labels to select the right node.
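
For example, the following fragment is a sketch of node affinity that selects nodes by the built-in beta.kubernetes.io/instance-type label; the instance type value m5.xlarge is purely illustrative:

# ...
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
          - key: beta.kubernetes.io/instance-type
            operator: In
            values:
            - m5.xlarge
# ...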

3.1.17.2.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.1.17.2.3. Configuring node affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Label the nodes where AMQ Streams components should be scheduled.

    On OpenShift this can be done using oc label:

    oc label node your-node node-type=fast-network

    Alternatively, some of the existing labels might be reused.

  2. Edit the affinity property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                  - key: node-type
                    operator: In
                    values:
                    - fast-network
        # ...
      zookeeper:
        # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.17.3. Using dedicated nodes

3.1.17.3.1. Dedicated nodes

Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks.

Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability.

To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations.

3.1.17.3.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.1.17.3.3. Tolerations

Tolerations can be configured using the tolerations property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The format of the tolerations property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations documentation.

3.1.17.3.4. Setting up dedicated nodes and scheduling pods on them

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Select the nodes which should be used as dedicated nodes.
  2. Make sure there are no workloads scheduled on these nodes.
  3. Set the taints on the selected nodes.

    On OpenShift this can be done using oc adm taint:

    oc adm taint node your-node dedicated=Kafka:NoSchedule
  4. Additionally, add a label to the selected nodes as well.

    On OpenShift this can be done using oc label:

    oc label node your-node dedicated=Kafka
  5. Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        tolerations:
          - key: "dedicated"
            operator: "Equal"
            value: "Kafka"
            effect: "NoSchedule"
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: dedicated
                  operator: In
                  values:
                  - Kafka
        # ...
      zookeeper:
        # ...
  6. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.1.18. Performing a rolling update of a Kafka cluster

This procedure describes how to manually trigger a rolling update of an existing Kafka cluster by using an OpenShift annotation.

Prerequisites

  • A running Kafka cluster.
  • A running Cluster Operator.

Procedure

  1. Find the name of the StatefulSet that controls the Kafka pods you want to manually update.

    For example, if your Kafka cluster is named my-cluster, the corresponding StatefulSet is named my-cluster-kafka.

  2. Annotate a StatefulSet resource in OpenShift.

    On OpenShift, use oc annotate:

    oc annotate statefulset cluster-name-kafka operator.strimzi.io/manual-rolling-update=true
  3. Wait for the next reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated StatefulSet is triggered, as long as the annotation was detected by the reconciliation process. Once the rolling update of all the pods is complete, the annotation is removed from the StatefulSet.

3.1.19. Performing a rolling update of a Zookeeper cluster

This procedure describes how to manually trigger a rolling update of an existing Zookeeper cluster by using an OpenShift annotation.

Prerequisites

  • A running Zookeeper cluster.
  • A running Cluster Operator.

Procedure

  1. Find the name of the StatefulSet that controls the Zookeeper pods you want to manually update.

    For example, if your Kafka cluster is named my-cluster, the corresponding StatefulSet is named my-cluster-zookeeper.

  2. Annotate a StatefulSet resource in OpenShift.

    On OpenShift, use oc annotate:

    oc annotate statefulset cluster-name-zookeeper operator.strimzi.io/manual-rolling-update=true
  3. Wait for the next reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated StatefulSet is triggered, as long as the annotation was detected by the reconciliation process. Once the rolling update of all the pods is complete, the annotation is removed from the StatefulSet.

3.1.20. Scaling clusters

3.1.20.1. Scaling Kafka clusters

3.1.20.1.1. Adding brokers to a cluster

The primary way of increasing throughput for a topic is to increase the number of partitions for that topic. That works because the extra partitions allow the load of the topic to be shared between the different brokers in the cluster. However, in situations where every broker is constrained by a particular resource (typically I/O), using more partitions will not result in increased throughput. Instead, you need to add brokers to the cluster.

When you add an extra broker to the cluster, Kafka does not assign any partitions to it automatically. You must decide which partitions to move from the existing brokers to the new broker.

Once the partitions have been redistributed between all the brokers, the resource utilization of each broker should be reduced.
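
For example, adding a broker is done by editing the Kafka resource; the following fragment is a sketch that raises the broker count to 4 (the number is illustrative), after which you reassign partitions as described in Section 3.1.20.6, “Scaling up a Kafka cluster”:

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    replicas: 4
    # ...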

3.1.20.1.2. Removing brokers from a cluster

Because AMQ Streams uses StatefulSets to manage broker pods, you cannot remove an arbitrary pod from the cluster. You can only remove one or more of the highest numbered pods from the cluster. For example, in a cluster of 12 brokers the pods are named cluster-name-kafka-0 up to cluster-name-kafka-11. If you decide to scale down by one broker, the cluster-name-kafka-11 pod will be removed.

Before you remove a broker from a cluster, ensure that it is not assigned to any partitions. You should also decide which of the remaining brokers will be responsible for each of the partitions on the broker being decommissioned. Once the broker has no assigned partitions, you can scale the cluster down safely.

3.1.20.2. Partition reassignment

The Topic Operator does not currently support reassigning replicas to different brokers, so it is necessary to connect directly to broker pods to reassign replicas to brokers.

Within a broker pod, the kafka-reassign-partitions.sh utility allows you to reassign partitions to different brokers.

It has three different modes:

--generate
Takes a set of topics and brokers and generates a reassignment JSON file which will result in the partitions of those topics being assigned to those brokers. Because this operates on whole topics, it cannot be used when you just need to reassign some of the partitions of some topics.
--execute
Takes a reassignment JSON file and applies it to the partitions and brokers in the cluster. Brokers that gain partitions as a result become followers of the partition leader. For a given partition, once the new broker has caught up and joined the ISR (in-sync replicas) the old broker will stop being a follower and will delete its replica.
--verify
Using the same reassignment JSON file as the --execute step, --verify checks whether all of the partitions in the file have been moved to their intended brokers. If the reassignment is complete, --verify also removes any throttles that are in effect. Unless removed, throttles will continue to affect the cluster even after the reassignment has finished.

It is only possible to have one reassignment running in a cluster at any given time, and it is not possible to cancel a running reassignment. If you need to cancel a reassignment, wait for it to complete and then perform another reassignment to revert the effects of the first reassignment. The kafka-reassign-partitions.sh utility will print the reassignment JSON for this reversion as part of its output. Very large reassignments should be broken down into a number of smaller reassignments in case there is a need to stop an in-progress reassignment.

3.1.20.2.1. Reassignment JSON file

The reassignment JSON file has a specific structure:

{
  "version": 1,
  "partitions": [
    <PartitionObjects>
  ]
}

Where <PartitionObjects> is a comma-separated list of objects like:

{
  "topic": <TopicName>,
  "partition": <Partition>,
  "replicas": [ <AssignedBrokerIds> ]
}
Note

Although Kafka also supports a "log_dirs" property, this should not be used in Red Hat AMQ Streams.

The following is an example reassignment JSON file that assigns topic topic-a, partition 4 to brokers 2, 4 and 7, and topic topic-b partition 2 to brokers 1, 5 and 7:

{
  "version": 1,
  "partitions": [
    {
      "topic": "topic-a",
      "partition": 4,
      "replicas": [2,4,7]
    },
    {
      "topic": "topic-b",
      "partition": 2,
      "replicas": [1,5,7]
    }
  ]
}

Partitions not included in the JSON are not changed.

3.1.20.3. Generating reassignment JSON files

This procedure describes how to generate a reassignment JSON file that reassigns all the partitions for a given set of topics using the kafka-reassign-partitions.sh tool.

Prerequisites

  • A running Cluster Operator
  • A Kafka resource
  • A set of topics to reassign the partitions of

Procedure

  1. Prepare a JSON file named topics.json that lists the topics to move. It must have the following structure:

    {
      "version": 1,
      "topics": [
        <TopicObjects>
      ]
    }

    where <TopicObjects> is a comma-separated list of objects like:

    {
      "topic": <TopicName>
    }

    For example if you want to reassign all the partitions of topic-a and topic-b, you would need to prepare a topics.json file like this:

    {
      "version": 1,
      "topics": [
        { "topic": "topic-a"},
        { "topic": "topic-b"}
      ]
    }
  2. Copy the topics.json file to one of the broker pods:

    On OpenShift:

    cat topics.json | oc rsh -c kafka <BrokerPod> /bin/bash -c \
      'cat > /tmp/topics.json'
  3. Use the kafka-reassign-partitions.sh command to generate the reassignment JSON.

    On OpenShift:

    oc rsh -c kafka <BrokerPod> \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --topics-to-move-json-file /tmp/topics.json \
      --broker-list <BrokerList> \
      --generate

    For example, to move all the partitions of topic-a and topic-b to brokers 4 and 7:

    oc rsh -c kafka <BrokerPod> \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --topics-to-move-json-file /tmp/topics.json \
      --broker-list 4,7 \
      --generate

3.1.20.4. Creating reassignment JSON files manually

You can manually create the reassignment JSON file if you want to move specific partitions.
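
For example, the following is a sketch of a manually created reassignment JSON file that moves only partition 0 of topic-a to brokers 1, 2 and 3 (the topic name and broker IDs are illustrative); partitions not listed in the file are left unchanged:

{
  "version": 1,
  "partitions": [
    {
      "topic": "topic-a",
      "partition": 0,
      "replicas": [1,2,3]
    }
  ]
}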

3.1.20.5. Reassignment throttles

Partition reassignment can be a slow process because it involves transferring large amounts of data between brokers. To avoid a detrimental impact on clients, you can throttle the reassignment process. This might cause the reassignment to take longer to complete.

  • If the throttle is too low then the newly assigned brokers will not be able to keep up with records being published and the reassignment will never complete.
  • If the throttle is too high then clients will be impacted.

For example, for producers, this could manifest as higher than normal latency waiting for acknowledgement. For consumers, this could manifest as a drop in throughput caused by higher latency between polls.

3.1.20.6. Scaling up a Kafka cluster

This procedure describes how to increase the number of brokers in a Kafka cluster.

Prerequisites

  • An existing Kafka cluster.
  • A reassignment JSON file named reassignment.json that describes how partitions should be reassigned to brokers in the enlarged cluster.

Procedure

  1. Add as many new brokers as you need by increasing the Kafka.spec.kafka.replicas configuration option.
  2. Verify that the new broker pods have started.
  3. Copy the reassignment.json file to the broker pod on which you will later execute the commands:

    On OpenShift:

    cat reassignment.json | \
      oc rsh -c kafka broker-pod /bin/bash -c \
      'cat > /tmp/reassignment.json'

    For example:

    cat reassignment.json | \
      oc rsh -c kafka my-cluster-kafka-0 /bin/bash -c \
      'cat > /tmp/reassignment.json'
  4. Execute the partition reassignment using the kafka-reassign-partitions.sh command line tool from the same broker pod.

    On OpenShift:

    oc rsh -c kafka broker-pod \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --execute

    If you are going to throttle replication you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example:

    On OpenShift:

    oc rsh -c kafka my-cluster-kafka-0 \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --throttle 5000000 \
      --execute

    This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. You should save this to a local file (not a file in the pod) in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file.

  5. If you need to change the throttle during reassignment you can use the same command line with a different throttled rate. For example:

    On OpenShift:

    oc rsh -c kafka my-cluster-kafka-0 \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --throttle 10000000 \
      --execute
  6. Periodically verify whether the reassignment has completed using the kafka-reassign-partitions.sh command line tool from any of the broker pods. This is the same command as the previous step but with the --verify option instead of the --execute option.

    On OpenShift:

    oc rsh -c kafka broker-pod \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --verify

    For example, on OpenShift:

    oc rsh -c kafka my-cluster-kafka-0 \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --verify
  7. The reassignment has finished when the --verify command reports each of the partitions being moved as completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignment to their original brokers.

3.1.20.7. Scaling down a Kafka cluster

This procedure describes how to decrease the number of brokers in a Kafka cluster.

Prerequisites

  • An existing Kafka cluster.
  • A reassignment JSON file named reassignment.json describing how partitions should be reassigned to brokers in the cluster once the broker(s) in the highest numbered Pod(s) have been removed.

Procedure

  1. Copy the reassignment.json file to the broker pod on which you will later execute the commands:

    On OpenShift:

    cat reassignment.json | \
      oc rsh -c kafka broker-pod /bin/bash -c \
      'cat > /tmp/reassignment.json'

    For example:

    cat reassignment.json | \
      oc rsh -c kafka my-cluster-kafka-0 /bin/bash -c \
      'cat > /tmp/reassignment.json'
  2. Execute the partition reassignment using the kafka-reassign-partitions.sh command line tool from the same broker pod.

    On OpenShift:

    oc rsh -c kafka broker-pod \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --execute

    If you are going to throttle replication you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example:

    On OpenShift:

    oc rsh -c kafka my-cluster-kafka-0 \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --throttle 5000000 \
      --execute

    This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. You should save this to a local file (not a file in the pod) in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file.

  3. If you need to change the throttle during reassignment you can use the same command line with a different throttled rate. For example:

    On OpenShift:

    oc rsh -c kafka my-cluster-kafka-0 \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --throttle 10000000 \
      --execute
  4. Periodically verify whether the reassignment has completed using the kafka-reassign-partitions.sh command line tool from any of the broker pods. This is the same command as the previous step but with the --verify option instead of the --execute option.

    On OpenShift:

    oc rsh -c kafka broker-pod \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --verify

    For example, on OpenShift:

    oc rsh -c kafka my-cluster-kafka-0 \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --verify
  5. The reassignment has finished when the --verify command reports each of the partitions being moved as completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignment to their original brokers.
  6. Once all the partition reassignments have finished, the broker(s) being removed should not have responsibility for any of the partitions in the cluster. You can verify this by checking that the broker’s data log directory does not contain any live partition logs. If the log directory on the broker contains a directory that does not match the extended regular expression \.[a-z0-9]+-delete$, then the broker still has live partitions and it should not be stopped.

    You can check this by executing the command:

    oc rsh -c kafka <BrokerN> /bin/bash -c \
      "ls -l /var/lib/kafka/kafka-log<N> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\.[a-z0-9]+-delete$'"

    where N is the number of the Pod(s) being deleted.

    If the above command prints any output then the broker still has live partitions. In this case, either the reassignment has not finished, or the reassignment JSON file was incorrect.

  7. Once you have confirmed that the broker has no live partitions you can edit the Kafka.spec.kafka.replicas of your Kafka resource, which will scale down the StatefulSet, deleting the highest numbered broker Pod(s).

3.1.21. Deleting Kafka nodes manually

This procedure describes how to delete an existing Kafka node by using an OpenShift annotation. Deleting a Kafka node consists of deleting both the Pod on which the Kafka broker is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically.

Warning

Deleting a PersistentVolumeClaim can cause permanent data loss. The following procedure should only be performed if you have encountered storage issues.

Prerequisites

  • A running Kafka cluster.
  • A running Cluster Operator.

Procedure

  1. Find the name of the Pod that you want to delete.

    For example, if the cluster is named cluster-name, the pods are named cluster-name-kafka-index, where index starts at zero and ends at the total number of replicas minus one.

  2. Annotate the Pod resource in OpenShift.

    On OpenShift, use oc annotate:

    oc annotate pod cluster-name-kafka-index operator.strimzi.io/delete-pod-and-pvc=true
  3. Wait for the next reconciliation, when the annotated pod and its underlying persistent volume claim will be deleted and then recreated.

3.1.22. Deleting Zookeeper nodes manually

This procedure describes how to delete an existing Zookeeper node by using an OpenShift annotation. Deleting a Zookeeper node consists of deleting both the Pod on which Zookeeper is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically.

Warning

Deleting a PersistentVolumeClaim can cause permanent data loss. The following procedure should only be performed if you have encountered storage issues.

Prerequisites

  • A running Zookeeper cluster.
  • A running Cluster Operator.

Procedure

  1. Find the name of the Pod that you want to delete.

    For example, if the cluster is named cluster-name, the pods are named cluster-name-zookeeper-index, where index starts at zero and ends at the total number of replicas minus one.

  2. Annotate the Pod resource in OpenShift.

    On OpenShift, use oc annotate:

    oc annotate pod cluster-name-zookeeper-index operator.strimzi.io/delete-pod-and-pvc=true
  3. Wait for the next reconciliation, when the annotated pod and its underlying persistent volume claim will be deleted and then recreated.

3.1.23. List of resources created as part of Kafka cluster

The following resources are created by the Cluster Operator in the OpenShift cluster:

cluster-name-kafka
StatefulSet which is in charge of managing the Kafka broker pods.
cluster-name-kafka-brokers
Service needed to have DNS resolve the Kafka broker pods IP addresses directly.
cluster-name-kafka-bootstrap
Service which can be used as the bootstrap server for Kafka clients.
cluster-name-kafka-external-bootstrap
Bootstrap service for clients connecting from outside of the OpenShift cluster. This resource will be created only when an external listener is enabled.
cluster-name-kafka-pod-id
Service used to route traffic from outside of the OpenShift cluster to individual pods. This resource will be created only when an external listener is enabled.
cluster-name-kafka-external-bootstrap
Bootstrap route for clients connecting from outside of the OpenShift cluster. This resource will be created only when an external listener is enabled and set to type route.
cluster-name-kafka-pod-id
Route for traffic from outside of the OpenShift cluster to individual pods. This resource will be created only when an external listener is enabled and set to type route.
cluster-name-kafka-config
ConfigMap which contains the Kafka ancillary configuration and is mounted as a volume by the Kafka broker pods.
cluster-name-kafka-brokers
Secret with Kafka broker keys.
cluster-name-kafka
Service account used by the Kafka brokers.
strimzi-namespace-name-cluster-name-kafka-init
Cluster role binding used by the Kafka brokers.
cluster-name-zookeeper
StatefulSet which is in charge of managing the Zookeeper node pods.
cluster-name-zookeeper-nodes
Service needed to have DNS resolve the Zookeeper pods IP addresses directly.
cluster-name-zookeeper-client
Service used by Kafka brokers to connect to Zookeeper nodes as clients.
cluster-name-zookeeper-config
ConfigMap which contains the Zookeeper ancillary configuration and is mounted as a volume by the Zookeeper node pods.
cluster-name-zookeeper-nodes
Secret with Zookeeper node keys.
cluster-name-entity-operator
Deployment with Topic and User Operators. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-topic-operator-config
ConfigMap with ancillary configuration for the Topic Operator. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-user-operator-config
ConfigMap with ancillary configuration for the User Operator. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-operator-certs
Secret with Entity Operator keys for communication with Kafka and Zookeeper. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-operator
Service account used by the Entity Operator.
strimzi-cluster-name-topic-operator
Role binding used by the Entity Operator.
strimzi-cluster-name-user-operator
Role binding used by the Entity Operator.
cluster-name-cluster-ca
Secret with the Cluster CA used to encrypt the cluster communication.
cluster-name-cluster-ca-cert
Secret with the Cluster CA public key. This key can be used to verify the identity of the Kafka brokers.
cluster-name-clients-ca
Secret with the Clients CA used to encrypt the communication between Kafka brokers and Kafka clients.
cluster-name-clients-ca-cert
Secret with the Clients CA public key. This key can be used to verify the identity of Kafka clients.
data-cluster-name-kafka-idx
Persistent Volume Claim for the volume used for storing data for the Kafka broker pod idx. This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data.
data-cluster-name-zookeeper-idx
Persistent Volume Claim for the volume used for storing data for the Zookeeper node pod idx. This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data.

3.2. Kafka Connect cluster configuration

The full schema of the KafkaConnect resource is described in the Section B.29, “KafkaConnect schema reference”. All labels that are applied to the desired KafkaConnect resource will also be applied to the OpenShift resources making up the Kafka Connect cluster. This provides a convenient mechanism for those resources to be labelled in whatever way the user requires.

3.2.1. Replicas

Kafka Connect clusters can run with a different number of nodes. The number of nodes is defined in the KafkaConnect and KafkaConnectS2I resources. Running a Kafka Connect cluster with multiple nodes can provide better availability and scalability. However, when running Kafka Connect on OpenShift it is not absolutely necessary to run multiple nodes of Kafka Connect for high availability. If the node to which Kafka Connect is deployed crashes, OpenShift will automatically reschedule the Kafka Connect pod to a different node. Running Kafka Connect with multiple nodes can nevertheless provide faster failover times, because the other nodes will already be up and running.

3.2.1.1. Configuring the number of nodes

The number of Kafka Connect nodes can be configured using the replicas property in KafkaConnect.spec and KafkaConnectS2I.spec.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the replicas property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnectS2I
    metadata:
      name: my-cluster
    spec:
      # ...
      replicas: 3
      # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.2. Bootstrap servers

A Kafka Connect cluster always works together with a Kafka cluster. The Kafka cluster is specified in the form of a list of bootstrap servers. On OpenShift, the list should ideally contain the Kafka cluster bootstrap service, which is named cluster-name-kafka-bootstrap, with port 9092 for plain traffic or 9093 for encrypted traffic.

The list of bootstrap servers can be configured in the bootstrapServers property in KafkaConnect.spec and KafkaConnectS2I.spec. The servers should be a comma-separated list containing one or more Kafka brokers, or a service pointing to Kafka brokers, specified as hostname:port pairs.

When using Kafka Connect with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of a given cluster.

3.2.2.1. Configuring bootstrap servers

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the bootstrapServers property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnect
    metadata:
      name: my-cluster
    spec:
      # ...
      bootstrapServers: my-cluster-kafka-bootstrap:9092
      # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.3. Connecting to Kafka brokers using TLS

By default, Kafka Connect will try to connect to Kafka brokers using a plain text connection. If you would prefer to use TLS, additional configuration is necessary.

3.2.3.1. TLS support in Kafka Connect

TLS support is configured in the tls property in KafkaConnect.spec and KafkaConnectS2I.spec. The tls property contains a list of secrets with key names under which the certificates are stored. The certificates should be stored in X509 format.

An example showing TLS configuration with multiple certificates

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-secret
        certificate: ca.crt
      - secretName: my-other-secret
        certificate: certificate.crt
  # ...

When multiple certificates are stored in the same secret, the secret can be listed multiple times.

An example showing TLS configuration with multiple certificates from the same secret

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnectS2I
metadata:
  name: my-cluster
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-secret
        certificate: ca.crt
      - secretName: my-secret
        certificate: ca2.crt
  # ...

3.2.3.2. Configuring TLS in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Find out the name of the secret with the certificate which should be used for TLS Server Authentication, and the key under which the certificate is stored in the secret. If such a secret does not exist yet, prepare the certificate in a file and create the secret.

    On OpenShift this can be done using oc create:

    oc create secret generic my-secret --from-file=my-file.crt
  2. Edit the tls property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      tls:
        trustedCertificates:
          - secretName: my-cluster-cluster-cert
            certificate: ca.crt
      # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.4. Connecting to Kafka brokers with Authentication

By default, Kafka Connect will try to connect to Kafka brokers without any authentication. Authentication can be enabled in the KafkaConnect and KafkaConnectS2I resources.

3.2.4.1. Authentication support in Kafka Connect

Authentication can be configured in the authentication property in KafkaConnect.spec and KafkaConnectS2I.spec. The authentication property specifies the type of the authentication mechanisms which should be used and additional configuration details depending on the mechanism. The currently supported authentication types are:

  • TLS client authentication
  • SASL based authentication using SCRAM-SHA-512 mechanism
3.2.4.1.1. TLS Client Authentication

To use TLS client authentication, set the type property to the value tls. TLS client authentication uses a TLS certificate to authenticate. The certificate and private key have to be specified in the certificateAndKey property and are always loaded from an OpenShift secret. Inside the secret, they have to be stored in X509 format under two different keys: one for the certificate and one for the private key.

Note

TLS client authentication can be used only with TLS connections. For more details about TLS configuration in Kafka Connect see Section 3.2.3, “Connecting to Kafka brokers using TLS”.

An example showing TLS client authentication configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  authentication:
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: public.crt
      key: private.key
  # ...

3.2.4.1.2. SCRAM-SHA-512 authentication

To use SASL-based authentication with the SCRAM-SHA-512 mechanism, set the type property to the value scram-sha-512. SCRAM-SHA-512 uses a username and password to authenticate. Specify the username in the username property. In the passwordSecret property, specify a link to a Secret containing the password: the name of the Secret and the name of the key under which the password is stored inside the Secret.

An example showing SCRAM-SHA-512 client authentication configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  authentication:
    type: scram-sha-512
    username: my-connect-user
    passwordSecret:
      secretName: my-connect-user
      password: password
  # ...

3.2.4.2. Configuring TLS client authentication in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Find out the name of the Secret with the public and private keys which should be used for TLS Client Authentication and the keys under which they are stored in the Secret. If such a Secret does not exist yet, prepare the keys in a file and create the Secret.

    On OpenShift this can be done using oc create:

    oc create secret generic my-secret --from-file=my-public.crt --from-file=my-private.key
  2. Edit the authentication property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      authentication:
        type: tls
        certificateAndKey:
          secretName: my-secret
          certificate: my-public.crt
          key: my-private.key
      # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.4.3. Configuring SCRAM-SHA-512 authentication in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • Username of the user which should be used for authentication

Procedure

  1. Find out the name of the Secret with the password which should be used for authentication and the key under which the password is stored in the Secret. If such a Secret does not exist yet, prepare a file with the password and create the Secret.

    On OpenShift this can be done using oc create:

    echo -n '1f2d1e2e67df' > my-password.txt
    oc create secret generic my-secret --from-file=my-password.txt
  2. Edit the authentication property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      authentication:
        type: scram-sha-512
        username: my-username
        passwordSecret:
          secretName: my-secret
          password: my-password.txt
      # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.5. Kafka Connect configuration

AMQ Streams allows you to customize the configuration of Apache Kafka Connect nodes by editing most of the options listed in the Apache Kafka documentation.

The only options which cannot be configured are those related to the following areas:

  • Kafka cluster bootstrap address
  • Security (Encryption, Authentication, and Authorization)
  • Listener / REST interface configuration
  • Plugin path configuration

These options are automatically configured by AMQ Streams.

3.2.5.1. Kafka Connect configuration

Kafka Connect can be configured using the config property in KafkaConnect.spec and KafkaConnectS2I.spec. This property should contain the Kafka Connect configuration options as keys. The values can be one of the following JSON types:

  • String
  • Number
  • Boolean

Users can specify and configure the options listed in the Apache Kafka documentation with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • ssl.
  • sasl.
  • security.
  • listeners
  • plugin.path
  • rest.
  • bootstrap.servers

When one of the forbidden options is present in the config property, it will be ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka Connect.

Important

The Cluster Operator does not validate keys or values in the provided config object. When an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In such cases, the configuration in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config object should be fixed and the Cluster Operator will roll out the new configuration to all Kafka Connect nodes.

Selected options have default values:

  • group.id with default value connect-cluster
  • offset.storage.topic with default value connect-cluster-offsets
  • config.storage.topic with default value connect-cluster-configs
  • status.storage.topic with default value connect-cluster-status
  • key.converter with default value org.apache.kafka.connect.json.JsonConverter
  • value.converter with default value org.apache.kafka.connect.json.JsonConverter
  • internal.key.converter with default value org.apache.kafka.connect.json.JsonConverter
  • internal.value.converter with default value org.apache.kafka.connect.json.JsonConverter
  • internal.key.converter.schemas.enable with default value false
  • internal.value.converter.schemas.enable with default value false

These options will be automatically configured if they are not present in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config properties.

An example showing Kafka Connect configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    internal.key.converter: org.apache.kafka.connect.json.JsonConverter
    internal.value.converter: org.apache.kafka.connect.json.JsonConverter
    internal.key.converter.schemas.enable: false
    internal.value.converter.schemas.enable: false
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  # ...

3.2.5.2. Configuring Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the config property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      config:
        group.id: my-connect-cluster
        offset.storage.topic: my-connect-cluster-offsets
        config.storage.topic: my-connect-cluster-configs
        status.storage.topic: my-connect-cluster-status
        key.converter: org.apache.kafka.connect.json.JsonConverter
        value.converter: org.apache.kafka.connect.json.JsonConverter
        key.converter.schemas.enable: true
        value.converter.schemas.enable: true
        internal.key.converter: org.apache.kafka.connect.json.JsonConverter
        internal.value.converter: org.apache.kafka.connect.json.JsonConverter
        internal.key.converter.schemas.enable: false
        internal.value.converter.schemas.enable: false
        config.storage.replication.factor: 3
        offset.storage.replication.factor: 3
        status.storage.replication.factor: 3
      # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.6. CPU and memory resources

For every deployed container, AMQ Streams allows you to specify the resources which should be reserved for it and the maximum resources that can be consumed by it. AMQ Streams supports two types of resources:

  • Memory
  • CPU

AMQ Streams uses the OpenShift syntax for specifying CPU and memory resources.

3.2.6.1. Resource limits and requests

Resource limits and requests can be configured using the resources property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
3.2.6.1.1. Resource requests

Requests specify the resources that will be reserved for a given container. Reserving the resources will ensure that they are always available.

Important

If the resource request is for more than the available free resources in the OpenShift cluster, the pod will not be scheduled.

Resource requests can be specified in the requests property. The resource requests currently supported by AMQ Streams are memory and CPU. Memory is specified under the property memory. CPU is specified under the property cpu.

An example showing resource request configuration

# ...
resources:
  requests:
    cpu: 12
    memory: 64Gi
# ...

It is also possible to specify a resource request just for one of the resources:

An example showing resource request configuration with memory request only

# ...
resources:
  requests:
    memory: 64Gi
# ...

Or:

An example showing resource request configuration with CPU request only

# ...
resources:
  requests:
    cpu: 12
# ...

3.2.6.1.2. Resource limits

Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. The container can use resources up to the limit only when they are available. Resource limits should always be higher than the resource requests.

Resource limits can be specified in the limits property. The resource limits currently supported by AMQ Streams are memory and CPU. Memory is specified under the property memory. CPU is specified under the property cpu.

An example showing resource limits configuration

# ...
resources:
  limits:
    cpu: 12
    memory: 64Gi
# ...

It is also possible to specify the resource limit just for one of the resources:

An example showing resource limit configuration with a memory limit only

# ...
resources:
  limits:
    memory: 64Gi
# ...

Or:

An example showing resource limit configuration with a CPU limit only

# ...
resources:
  limits:
    cpu: 12
# ...

3.2.6.1.3. Supported CPU formats

CPU requests and limits are supported in the following formats:

  • Number of CPU cores as an integer (5 CPU cores) or decimal (2.5 CPU cores).
  • Number of millicpus / millicores (100m), where 1000 millicores is the same as 1 CPU core.

An example of using different CPU units

# ...
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 2.5
# ...

Note

The amount of computing power of 1 CPU core might differ depending on the platform where OpenShift is deployed.

For more details about the CPU specification, see the Meaning of CPU website.

3.2.6.1.4. Supported memory formats

Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.

  • To specify memory in megabytes, use the M suffix. For example 1000M.
  • To specify memory in gigabytes, use the G suffix. For example 1G.
  • To specify memory in mebibytes, use the Mi suffix. For example 1000Mi.
  • To specify memory in gibibytes, use the Gi suffix. For example 1Gi.

An example of using different memory units

# ...
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# ...

For more details about the memory specification and additional supported units, see the Meaning of memory website.

3.2.6.2. Configuring resource requests and limits

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the resources property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        resources:
          requests:
            cpu: "8"
            memory: 64Gi
          limits:
            cpu: "12"
            memory: 128Gi
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.7. Logging

Logging enables you to diagnose error and performance issues of AMQ Streams. Various logger implementations are used: Kafka and Zookeeper use the log4j logger, while the Topic Operator, User Operator, and other components use the log4j2 logger.

This section provides information about different loggers and describes how to configure log levels.

You can set the log levels by specifying the loggers and their levels directly (inline) or by using a custom (external) ConfigMap.

3.2.7.1. Using inline logging setting

Procedure

  1. Edit the YAML file to specify the loggers and their level for the required components. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        logging:
          type: inline
          loggers:
           logger.name: "INFO"
        # ...

    In the above example, the log level is set to INFO. You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. For more information about the log levels, see the log4j manual.

  2. Create or update the Kafka resource in OpenShift.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.7.2. Using external ConfigMap for logging setting

Procedure

  1. Edit the YAML file to specify the name of the ConfigMap which should be used for the required components. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        logging:
          type: external
          name: customConfigMap
        # ...

    Remember to place your custom logging configuration under the log4j.properties or log4j2.properties key in the ConfigMap, as shown in the sketch after this procedure.

  2. Create or update the Kafka resource in OpenShift.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file
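
The ConfigMap referenced above must contain the logging configuration under the appropriate key. The following is a minimal sketch of such a ConfigMap for Kafka, keyed under log4j.properties; the name customConfigMap matches the example above, and the appender and logger lines are illustrative only, since a complete log4j configuration will normally contain more entries:

apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  log4j.properties: |
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n
    kafka.root.logger.level=INFO
    log4j.rootLogger=${kafka.root.logger.level}, CONSOLE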

3.2.7.3. Loggers

AMQ Streams consists of several components. Each component has its own loggers and is configurable. This section provides information about loggers of various components.

Components and their loggers are listed below.

  • Kafka

    • kafka.root.logger.level
    • log4j.logger.org.I0Itec.zkclient.ZkClient
    • log4j.logger.org.apache.zookeeper
    • log4j.logger.kafka
    • log4j.logger.org.apache.kafka
    • log4j.logger.kafka.request.logger
    • log4j.logger.kafka.network.Processor
    • log4j.logger.kafka.server.KafkaApis
    • log4j.logger.kafka.network.RequestChannel$
    • log4j.logger.kafka.controller
    • log4j.logger.kafka.log.LogCleaner
    • log4j.logger.state.change.logger
    • log4j.logger.kafka.authorizer.logger
  • Zookeeper

    • zookeeper.root.logger
  • Kafka Connect and Kafka Connect with Source2Image support

    • connect.root.logger.level
    • log4j.logger.org.apache.zookeeper
    • log4j.logger.org.I0Itec.zkclient
    • log4j.logger.org.reflections
  • Kafka Mirror Maker

    • mirrormaker.root.logger
  • Topic Operator

    • rootLogger.level
  • User Operator

    • rootLogger.level
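
For example, a hedged inline logging fragment for the Kafka component that uses two of the loggers listed above (the levels shown are illustrative):

# ...
logging:
  type: inline
  loggers:
    kafka.root.logger.level: "INFO"
    log4j.logger.kafka.controller: "DEBUG"
# ...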

3.2.8. Healthchecks

Healthchecks are periodical tests which verify the application’s health. When a healthcheck fails, OpenShift can assume that the application is not healthy and attempt to fix it. OpenShift supports two types of Healthcheck probes:

  • Liveness probes
  • Readiness probes

For more details about the probes, see Configure Liveness and Readiness Probes. Both types of probes are used in AMQ Streams components.

Users can configure selected options for liveness and readiness probes.

3.2.8.1. Healthcheck configurations

Liveness and readiness probes can be configured using the livenessProbe and readinessProbe properties in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

Both livenessProbe and readinessProbe support two additional options:

  • initialDelaySeconds
  • timeoutSeconds

The initialDelaySeconds property defines the initial delay before the probe is tried for the first time. Default is 15 seconds.

The timeoutSeconds property defines timeout of the probe. Default is 5 seconds.

An example of liveness and readiness probe configuration

# ...
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
livenessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...

3.2.8.2. Configuring healthchecks

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the livenessProbe or readinessProbe property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        readinessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.9. Prometheus metrics

AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and Zookeeper to Prometheus metrics. When metrics are enabled, they are exposed on port 9404.

3.2.9.1. Metrics configuration

Prometheus metrics can be enabled by configuring the metrics property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

When the metrics property is not defined in the resource, the Prometheus metrics will be disabled. To enable Prometheus metrics export without any further configuration, you can set it to an empty object ({}).

Example of enabling metrics without any further configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics: {}
    # ...
  zookeeper:
    # ...

The metrics property might contain additional configuration for the Prometheus JMX exporter.

Example of enabling metrics with additional Prometheus JMX Exporter configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics:
      lowercaseOutputName: true
      rules:
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count"
          name: "kafka_server_$1_$2_total"
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count"
          name: "kafka_server_$1_$2_total"
          labels:
            topic: "$3"
    # ...
  zookeeper:
    # ...

3.2.9.2. Configuring Prometheus metrics

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the metrics property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
        metrics:
          lowercaseOutputName: true
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.10. JVM Options

Apache Kafka and Apache Zookeeper run inside a Java Virtual Machine (JVM). The JVM has many configuration options to optimize the performance for different platforms and architectures. AMQ Streams allows configuring some of these options.

3.2.10.1. JVM configuration

JVM options can be configured using the jvmOptions property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

Only a selected subset of available JVM options can be configured. The following options are supported:

-Xms and -Xmx

-Xms configures the minimum initial allocation heap size when the JVM starts. -Xmx configures the maximum heap size.

Note

The units accepted by JVM settings such as -Xmx and -Xms are those accepted by the JDK java binary in the corresponding image. Accordingly, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is in contrast to the units used for memory requests and limits, which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes.

The default values used for -Xms and -Xmx depend on whether there is a memory limit configured for the container:

  • If there is a memory limit then the JVM’s minimum and maximum memory will be set to a value corresponding to the limit.
  • If there is no memory limit then the JVM’s minimum memory will be set to 128M and the JVM’s maximum memory will not be defined. This allows for the JVM’s memory to grow as-needed, which is ideal for single node environments in test and development.
Important

Setting -Xmx explicitly requires some care:

  • The JVM’s overall memory usage will be approximately 4 × the maximum heap, as configured by -Xmx.
  • If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it).
  • If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if -Xms is set to -Xmx, or some later time if not).

When setting -Xmx explicitly, it is recommended to:

  • set the memory request and the memory limit to the same value,
  • use a memory request that is at least 4.5 × the -Xmx,
  • consider setting -Xms to the same value as -Xmx.
Important

Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.

Example fragment configuring -Xmx and -Xms

# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...

In the above example, the JVM will use 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage will be approximately 8GiB.

Setting the same value for initial (-Xms) and maximum (-Xmx) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and Zookeeper pods such allocation could cause unwanted latency. For Kafka Connect avoiding over allocation may be the most important concern, especially in distributed mode where the effects of over-allocation will be multiplied by the number of consumers.
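
Putting these recommendations together, the following is a minimal sketch for a broker where -Xmx is set to 2g; the 9Gi request and limit satisfy the at-least-4.5 × -Xmx guideline and leave headroom for the operating system page cache (the values are illustrative, not a tuned configuration):

# ...
resources:
  requests:
    memory: 9Gi
  limits:
    memory: 9Gi
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...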

-server

-server enables the server JVM. This option can be set to true or false.

Example fragment configuring -server

# ...
jvmOptions:
  "-server": true
# ...

Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

-XX

The -XX object can be used for configuring advanced runtime options of a JVM. The -server and -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka.

Example showing the use of the -XX object

jvmOptions:
  "-XX":
    "UseG1GC": true,
    "MaxGCPauseMillis": 20,
    "InitiatingHeapOccupancyPercent": 35,
    "ExplicitGCInvokesConcurrent": true,
    "UseParNewGC": false

The example configuration above will result in the following JVM options:

-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

3.2.10.2. Configuring JVM options

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the jvmOptions property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        jvmOptions:
          "-Xmx": "8g"
          "-Xms": "8g"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.11. Container images

AMQ Streams allows you to configure container images which will be used for its components. Overriding container images is recommended only in special situations, where you need to use a different container registry. For example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from the source. If the configured image is not compatible with AMQ Streams images, it might not work properly.

3.2.11.1. Container image configurations

Container image which should be used for given components can be specified using the image property in:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The image specified in the component-specific custom resource will be used during deployment. If the image field is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used.

  • For Kafka brokers:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka:latest container image.
  • For Kafka broker TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-stunnel:latest container image.
  • For Zookeeper nodes:

    1. Container image specified in the STRIMZI_DEFAULT_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/zookeeper:latest container image.
  • For Zookeeper node TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/zookeeper-stunnel:latest container image.
  • For Topic Operator:

    1. Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
  • For User Operator:

    1. Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/user-operator:latest container image.
  • For Entity Operator TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/entity-operator-stunnel:latest container image.
  • For Kafka Connect:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-connect:latest container image.
  • For Kafka Connect with Source2image support:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-connect-s2i:latest container image.
Warning

Overriding container images is recommended only in special situations, where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.

Example of container image configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...
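
The default images themselves can be changed through the Cluster Operator environment variables listed above. A minimal sketch, assuming the Cluster Operator runs as a Deployment named strimzi-cluster-operator (the Deployment name, apiVersion, and registry below are illustrative assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  # ...
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          # ...
          env:
            # Changes the default image used for Kafka broker pods
            - name: STRIMZI_DEFAULT_KAFKA_IMAGE
              value: my-registry.example.com/my-org/kafka:latest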

3.2.11.2. Configuring container images

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the image property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        image: my-org/my-image:latest
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.12. Configuring pod scheduling

Important

When two applications are scheduled to the same OpenShift node, both might compete for the same resources, such as disk I/O, which can lead to performance degradation. Scheduling Kafka pods in a way that avoids sharing nodes with other critical workloads, using the right nodes, or dedicating a set of nodes only to Kafka are the best ways to avoid such problems.

3.2.12.1. Scheduling pods based on other applications

3.2.12.1.1. Avoiding critical applications sharing a node

Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.

3.2.12.1.2. Affinity

Affinity can be configured using the affinity property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.2.12.1.3. Configuring pod anti-affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: application
                      operator: In
                      values:
                        - postgresql
                        - mongodb
                topologyKey: "kubernetes.io/hostname"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.12.2. Scheduling pods to specific nodes

3.2.12.2.1. Node scheduling

The OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of AMQ Streams components to use the right nodes.

OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either the built-in node label like beta.kubernetes.io/instance-type or custom labels to select the right node.
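
For example, a hedged fragment selecting nodes by the built-in instance type label mentioned above (the instance type value is illustrative):

# ...
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: beta.kubernetes.io/instance-type
              operator: In
              values:
                - m5.4xlarge
# ...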

3.2.12.2.2. Affinity

Affinity can be configured using the affinity property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.2.12.2.3. Configuring node affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Label the nodes where AMQ Streams components should be scheduled.

    On OpenShift this can be done using oc label:

    oc label node your-node node-type=fast-network

    Alternatively, some of the existing labels might be reused.

  2. Edit the affinity property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                  - key: node-type
                    operator: In
                    values:
                    - fast-network
        # ...
      zookeeper:
        # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.12.3. Using dedicated nodes

3.2.12.3.1. Dedicated nodes

Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks.

Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability.

To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations.

3.2.12.3.2. Affinity

Affinity can be configured using the affinity property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.2.12.3.3. Tolerations

Tolerations can be configured using the tolerations property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The format of the tolerations property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations documentation.

3.2.12.3.4. Setting up dedicated nodes and scheduling pods on them

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Select the nodes which should be used as dedicated nodes.
  2. Make sure there are no workloads scheduled on these nodes.
  3. Set the taints on the selected nodes.

    On OpenShift this can be done using oc adm taint:

    oc adm taint node your-node dedicated=Kafka:NoSchedule
  4. Additionally, add a label to the selected nodes.

    On OpenShift this can be done using oc label:

    oc label node your-node dedicated=Kafka
  5. Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        tolerations:
          - key: "dedicated"
            operator: "Equal"
            value: "Kafka"
            effect: "NoSchedule"
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: dedicated
                  operator: In
                  values:
                  - Kafka
        # ...
      zookeeper:
        # ...
  6. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.13. List of resources created as part of Kafka Connect cluster

The following resources will be created by the Cluster Operator in the OpenShift cluster:

connect-cluster-name-connect
Deployment which is in charge to create the Kafka Connect worker node pods.
connect-cluster-name-connect-api
Service which exposes the REST interface for managing the Kafka Connect cluster.
connect-cluster-name-config
ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka Connect pods.

3.3. Kafka Connect cluster with Source2Image support

The full schema of the KafkaConnectS2I resource is described in the Section B.37, “KafkaConnectS2I schema reference”. All labels that are applied to the desired KafkaConnectS2I resource will also be applied to the OpenShift resources making up the Kafka Connect cluster with Source2Image support. This provides a convenient mechanism for those resources to be labelled in whatever way the user requires.

3.3.1. Replicas

Kafka Connect clusters can run with one or more nodes. The number of nodes is defined in the KafkaConnect and KafkaConnectS2I resources. Running a Kafka Connect cluster with multiple nodes can provide better availability and scalability. However, when running Kafka Connect on OpenShift, it is not absolutely necessary to run multiple nodes for high availability. When the node where Kafka Connect is deployed crashes, OpenShift will automatically reschedule the Kafka Connect pod to a different node. However, running Kafka Connect with multiple nodes can provide faster failover times, because the other nodes will already be up and running.

3.3.1.1. Configuring the number of nodes

The number of Kafka Connect nodes can be configured using the replicas property in KafkaConnect.spec and KafkaConnectS2I.spec.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the replicas property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnectS2I
    metadata:
      name: my-cluster
    spec:
      # ...
      replicas: 3
      # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.2. Bootstrap servers

A Kafka Connect cluster always works together with a Kafka cluster. The Kafka cluster is specified as a list of bootstrap servers. On OpenShift, the list should ideally contain the Kafka cluster bootstrap service, which is named cluster-name-kafka-bootstrap, with port 9092 for plain traffic or port 9093 for encrypted traffic.

The list of bootstrap servers can be configured in the bootstrapServers property in KafkaConnect.spec and KafkaConnectS2I.spec. The servers should be a comma-separated list of one or more Kafka brokers, or a service pointing to Kafka brokers, specified as hostname:port pairs.

When using Kafka Connect with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of a given cluster.
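
For example, a hedged fragment pointing Kafka Connect at an external cluster with two brokers (the hostnames below are illustrative):

# ...
bootstrapServers: kafka-broker-1.example.com:9092,kafka-broker-2.example.com:9092
# ...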

3.3.2.1. Configuring bootstrap servers

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the bootstrapServers property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnect
    metadata:
      name: my-cluster
    spec:
      # ...
      bootstrapServers: my-cluster-kafka-bootstrap:9092
      # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.3. Connecting to Kafka brokers using TLS

By default, Kafka Connect will try to connect to Kafka brokers using a plain text connection. If you prefer to use TLS, additional configuration is necessary.

3.3.3.1. TLS support in Kafka Connect

TLS support is configured in the tls property in KafkaConnect.spec and KafkaConnectS2I.spec. The tls property contains a list of secrets with key names under which the certificates are stored. The certificates should be stored in X509 format.

An example showing TLS configuration with multiple certificates

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-secret
        certificate: ca.crt
      - secretName: my-other-secret
        certificate: certificate.crt
  # ...

When multiple certificates are stored in the same secret, the secret can be listed multiple times.

An example showing TLS configuration with multiple certificates from the same secret

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnectS2I
metadata:
  name: my-cluster
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-secret
        certificate: ca.crt
      - secretName: my-secret
        certificate: ca2.crt
  # ...

3.3.3.2. Configuring TLS in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Find out the name of the secret with the certificate which should be used for TLS Server Authentication and the key under which the certificate is stored in the secret. If such a secret does not exist yet, prepare the certificate in a file and create the secret.

    On OpenShift this can be done using oc create:

    oc create secret generic my-secret --from-file=my-file.crt
  2. Edit the tls property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      tls:
        trustedCertificates:
          - secretName: my-cluster-cluster-cert
            certificate: ca.crt
      # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.4. Connecting to Kafka brokers with Authentication

By default, Kafka Connect will try to connect to Kafka brokers without any authentication. Authentication can be enabled in the KafkaConnect and KafkaConnectS2I resources.

3.3.4.1. Authentication support in Kafka Connect

Authentication can be configured in the authentication property in KafkaConnect.spec and KafkaConnectS2I.spec. The authentication property specifies the type of the authentication mechanisms which should be used and additional configuration details depending on the mechanism. The currently supported authentication types are:

  • TLS client authentication
  • SASL based authentication using SCRAM-SHA-512 mechanism
3.3.4.1.1. TLS Client Authentication

To use TLS client authentication, set the type property to tls. TLS client authentication uses a TLS certificate to authenticate. The certificate has to be specified in the certificateAndKey property and is always loaded from an OpenShift secret. Inside the secret, the certificate has to be stored in X509 format under two different keys: one for the public key and one for the private key.

Note

TLS client authentication can be used only with TLS connections. For more details about TLS configuration in Kafka Connect see Section 3.3.3, “Connecting to Kafka brokers using TLS”.

An example showing TLS client authentication configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  authentication:
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: public.crt
      key: private.key
  # ...

3.3.4.1.2. SCRAM-SHA-512 authentication

To use SASL-based authentication with the SCRAM-SHA-512 mechanism, set the type property to scram-sha-512. SCRAM-SHA-512 uses a username and password to authenticate. Specify the username in the username property. Specify the password as a link to a Secret in the passwordSecret property, which has to contain the name of the Secret holding the password and the key under which the password is stored inside the Secret.

An example showing SCRAM-SHA-512 client authentication configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  authentication:
    type: scram-sha-512
    username: my-connect-user
    passwordSecret:
      secretName: my-connect-user
      password: password
  # ...

3.3.4.2. Configuring TLS client authentication in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Find out the name of the Secret with the public and private keys which should be used for TLS Client Authentication and the keys under which they are stored in the Secret. If such a Secret does not exist yet, prepare the keys in a file and create the Secret.

    On OpenShift this can be done using oc create:

    oc create secret generic my-secret --from-file=my-public.crt --from-file=my-private.key
  2. Edit the authentication property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      authentication:
        type: tls
        certificateAndKey:
          secretName: my-secret
          certificate: my-public.crt
          key: my-private.key
      # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.4.3. Configuring SCRAM-SHA-512 authentication in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • Username of the user which should be used for authentication

Procedure

  1. Find out the name of the Secret with the password which should be used for authentication and the key under which the password is stored in the Secret. If such a Secret does not exist yet, prepare a file with the password and create the Secret.

    On OpenShift this can be done using oc create:

    echo -n '1f2d1e2e67df' > my-password.txt
    oc create secret generic my-secret --from-file=my-password.txt
  2. Edit the authentication property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      authentication:
        type: scram-sha-512
        username: my-username
        passwordSecret:
          secretName: my-secret
          password: my-password.txt
      # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.5. Kafka Connect configuration

AMQ Streams allows you to customize the configuration of Apache Kafka Connect nodes by editing most of the options listed in Apache Kafka documentation.

The only options which cannot be configured are those related to the following areas:

  • Kafka cluster bootstrap address
  • Security (Encryption, Authentication, and Authorization)
  • Listener / REST interface configuration
  • Plugin path configuration

These options are automatically configured by AMQ Streams.

3.3.5.1. Kafka Connect configuration

Kafka Connect can be configured using the config property in KafkaConnect.spec and KafkaConnectS2I.spec. This property should contain the Kafka Connect configuration options as keys. The values can be one of the following JSON types:

  • String
  • Number
  • Boolean

Users can specify and configure the options listed in the Apache Kafka documentation with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • ssl.
  • sasl.
  • security.
  • listeners
  • plugin.path
  • rest.
  • bootstrap.servers

When one of the forbidden options is present in the config property, it will be ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka Connect.

Important

The Cluster Operator does not validate keys or values in the provided config object. When an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In such cases, the configuration in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config object should be fixed and the Cluster Operator will roll out the new configuration to all Kafka Connect nodes.

Selected options have default values:

  • group.id with default value connect-cluster
  • offset.storage.topic with default value connect-cluster-offsets
  • config.storage.topic with default value connect-cluster-configs
  • status.storage.topic with default value connect-cluster-status
  • key.converter with default value org.apache.kafka.connect.json.JsonConverter
  • value.converter with default value org.apache.kafka.connect.json.JsonConverter
  • internal.key.converter with default value org.apache.kafka.connect.json.JsonConverter
  • internal.value.converter with default value org.apache.kafka.connect.json.JsonConverter
  • internal.key.converter.schemas.enable with default value false
  • internal.value.converter.schemas.enable with default value false

These options will be automatically configured in case they are not present in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config properties.

An example showing Kafka Connect configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    internal.key.converter: org.apache.kafka.connect.json.JsonConverter
    internal.value.converter: org.apache.kafka.connect.json.JsonConverter
    internal.key.converter.schemas.enable: false
    internal.value.converter.schemas.enable: false
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  # ...

3.3.5.2. Configuring Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the config property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      config:
        group.id: my-connect-cluster
        offset.storage.topic: my-connect-cluster-offsets
        config.storage.topic: my-connect-cluster-configs
        status.storage.topic: my-connect-cluster-status
        key.converter: org.apache.kafka.connect.json.JsonConverter
        value.converter: org.apache.kafka.connect.json.JsonConverter
        key.converter.schemas.enable: true
        value.converter.schemas.enable: true
        internal.key.converter: org.apache.kafka.connect.json.JsonConverter
        internal.value.converter: org.apache.kafka.connect.json.JsonConverter
        internal.key.converter.schemas.enable: false
        internal.value.converter.schemas.enable: false
        config.storage.replication.factor: 3
        offset.storage.replication.factor: 3
        status.storage.replication.factor: 3
      # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.6. CPU and memory resources

For every deployed container, AMQ Streams allows you to specify the resources which should be reserved for it and the maximum resources that can be consumed by it. AMQ Streams supports two types of resources:

  • Memory
  • CPU

AMQ Streams uses the OpenShift syntax for specifying CPU and memory resources.

3.3.6.1. Resource limits and requests

Resource limits and requests can be configured using the resources property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
3.3.6.1.1. Resource requests

Requests specify the resources that will be reserved for a given container. Reserving the resources will ensure that they are always available.

Important

If the resource request is for more than the available free resources in the OpenShift cluster, the pod will not be scheduled.

Resource requests can be specified in the requests property. The resource requests currently supported by AMQ Streams are memory and CPU. Memory is specified under the property memory. CPU is specified under the property cpu.

An example showing resource request configuration

# ...
resources:
  requests:
    cpu: 12
    memory: 64Gi
# ...

It is also possible to specify a resource request just for one of the resources:

An example showing resource request configuration with memory request only

# ...
resources:
  requests:
    memory: 64Gi
# ...

Or:

An example showing resource request configuration with CPU request only

# ...
resources:
  requests:
    cpu: 12
# ...

3.3.6.1.2. Resource limits

Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. The container can use the resources up to the limit only when they are available. The resource limits should always be higher than the resource requests.

Resource limits can be specified in the limits property. The resource limits currently supported by AMQ Streams are memory and CPU. Memory is specified under the property memory. CPU is specified under the property cpu.

An example showing resource limits configuration

# ...
resources:
  limits:
    cpu: 12
    memory: 64Gi
# ...

It is also possible to specify the resource limit just for one of the resources:

An example showing resource limits configuration with memory limit only

# ...
resources:
  limits:
    memory: 64Gi
# ...

Or:

An example showing resource limits configuration with CPU limit only

# ...
resources:
  limits:
    cpu: 12
# ...

3.3.6.1.3. Supported CPU formats

CPU requests and limits are supported in the following formats:

  • Number of CPU cores as an integer (5 CPU cores) or decimal (2.5 CPU cores).
  • Number of millicpus/millicores (100m), where 1000 millicores is the same as 1 CPU core.

An example of using different CPU units

# ...
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 2.5
# ...

Note

The amount of computing power of 1 CPU core might differ depending on the platform where OpenShift is deployed.

For more details about the CPU specification, see the Meaning of CPU website.

3.3.6.1.4. Supported memory formats

Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.

  • To specify memory in megabytes, use the M suffix. For example 1000M.
  • To specify memory in gigabytes, use the G suffix. For example 1G.
  • To specify memory in mebibytes, use the Mi suffix. For example 1000Mi.
  • To specify memory in gibibytes, use the Gi suffix. For example 1Gi.

An example of using different memory units

# ...
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# ...

For more details about the memory specification and additional supported units, see the Meaning of memory website.

3.3.6.1.5. Additional resources

3.3.6.2. Configuring resource requests and limits

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the resources property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        resources:
          requests:
            cpu: "8"
            memory: 64Gi
          limits:
            cpu: "12"
            memory: 128Gi
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

Additional resources

3.3.7. Logging

Logging enables you to diagnose error and performance issues in AMQ Streams. Various logger implementations are used: Kafka and Zookeeper use the log4j logger, while the Topic Operator, User Operator, and other components use the log4j2 logger.

This section provides information about different loggers and describes how to configure log levels.

You can set the log levels by specifying the loggers and their levels directly (inline) or by using a custom (external) config map.

3.3.7.1. Using inline logging setting

Procedure

  1. Edit the YAML file to specify the loggers and their level for the required components. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        logging:
          type: inline
          loggers:
           logger.name: "INFO"
        # ...

    In the above example, the log level is set to INFO. You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. For more information about the log levels, see log4j manual.

  2. Create or update the Kafka resource in OpenShift.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.7.2. Using external ConfigMap for logging setting

Procedure

  1. Edit the YAML file to specify the name of the ConfigMap which should be used for the required components. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        logging:
          type: external
          name: customConfigMap
        # ...

    Remember to place your custom logging configuration in the ConfigMap under the log4j.properties key (or the log4j2.properties key for components that use log4j2).

  2. Create or update the Kafka resource in OpenShift.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.7.3. Loggers

AMQ Streams consists of several components. Each component has its own loggers and is configurable. This section provides information about loggers of various components.

Components and their loggers are listed below.

  • Kafka

    • kafka.root.logger.level
    • log4j.logger.org.I0Itec.zkclient.ZkClient
    • log4j.logger.org.apache.zookeeper
    • log4j.logger.kafka
    • log4j.logger.org.apache.kafka
    • log4j.logger.kafka.request.logger
    • log4j.logger.kafka.network.Processor
    • log4j.logger.kafka.server.KafkaApis
    • log4j.logger.kafka.network.RequestChannel$
    • log4j.logger.kafka.controller
    • log4j.logger.kafka.log.LogCleaner
    • log4j.logger.state.change.logger
    • log4j.logger.kafka.authorizer.logger
  • Zookeeper

    • zookeeper.root.logger
  • Kafka Connect and Kafka Connect with Source2Image support

    • connect.root.logger.level
    • log4j.logger.org.apache.zookeeper
    • log4j.logger.org.I0Itec.zkclient
    • log4j.logger.org.reflections
  • Kafka Mirror Maker

    • mirrormaker.root.logger
  • Topic Operator

    • rootLogger.level
  • User Operator

    • rootLogger.level

3.3.8. Healthchecks

Healthchecks are periodical tests which verify the application’s health. When a healthcheck fails, OpenShift can assume that the application is not healthy and attempt to fix it. OpenShift supports two types of Healthcheck probes:

  • Liveness probes
  • Readiness probes

For more details about the probes, see Configure Liveness and Readiness Probes. Both types of probes are used in AMQ Streams components.

Users can configure selected options for liveness and readiness probes.

3.3.8.1. Healthcheck configurations

Liveness and readiness probes can be configured using the livenessProbe and readinessProbe properties in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

Both livenessProbe and readinessProbe support two additional options:

  • initialDelaySeconds
  • timeoutSeconds

The initialDelaySeconds property defines the initial delay before the probe is tried for the first time. Default is 15 seconds.

The timeoutSeconds property defines timeout of the probe. Default is 5 seconds.

An example of liveness and readiness probe configuration

# ...
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
livenessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...

3.3.8.2. Configuring healthchecks

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the livenessProbe or readinessProbe property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        readinessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.9. Prometheus metrics

AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and Zookeeper to Prometheus metrics. When metrics are enabled, they are exposed on port 9404.

3.3.9.1. Metrics configuration

Prometheus metrics can be enabled by configuring the metrics property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

When the metrics property is not defined in the resource, the Prometheus metrics will be disabled. To enable Prometheus metrics export without any further configuration, you can set it to an empty object ({}).

Example of enabling metrics without any further configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics: {}
    # ...
  zookeeper:
    # ...

The metrics property might contain additional configuration for the Prometheus JMX exporter.

Example of enabling metrics with additional Prometheus JMX Exporter configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics:
      lowercaseOutputName: true
      rules:
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count"
          name: "kafka_server_$1_$2_total"
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count"
          name: "kafka_server_$1_$2_total"
          labels:
            topic: "$3"
    # ...
  zookeeper:
    # ...

3.3.9.2. Configuring Prometheus metrics

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the metrics property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
        metrics:
          lowercaseOutputName: true
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.10. JVM Options

Apache Kafka and Apache Zookeeper run inside a Java Virtual Machine (JVM). The JVM has many configuration options to optimize the performance for different platforms and architectures. AMQ Streams allows configuring some of these options.

3.3.10.1. JVM configuration

JVM options can be configured using the jvmOptions property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

Only a selected subset of available JVM options can be configured. The following options are supported:

-Xms and -Xmx

-Xms configures the minimum initial allocation heap size when the JVM starts. -Xmx configures the maximum heap size.

Note

The units accepted by JVM settings such as -Xmx and -Xms are those accepted by the JDK java binary in the corresponding image. Accordingly, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is in contrast to the units used for memory requests and limits, which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes.

The default values used for -Xms and -Xmx depend on whether there is a memory limit configured for the container:

  • If there is a memory limit then the JVM’s minimum and maximum memory will be set to a value corresponding to the limit.
  • If there is no memory limit then the JVM’s minimum memory will be set to 128M and the JVM’s maximum memory will not be defined. This allows for the JVM’s memory to grow as-needed, which is ideal for single node environments in test and development.
Important

Setting -Xmx explicitly requires some care:

  • The JVM’s overall memory usage will be approximately 4 × the maximum heap, as configured by -Xmx.
  • If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it).
  • If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if -Xms is set to -Xmx, or some later time if not).

When setting -Xmx explicitly, it is recommended to:

  • set the memory request and the memory limit to the same value,
  • use a memory request that is at least 4.5 × the -Xmx,
  • consider setting -Xms to the same value as -Xmx.
Important

Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.

Example fragment configuring -Xmx and -Xms

# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...

In the above example, the JVM will use 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage will be approximately 8GiB.

Setting the same value for initial (-Xms) and maximum (-Xmx) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and Zookeeper pods such allocation could cause unwanted latency. For Kafka Connect avoiding over allocation may be the most important concern, especially in distributed mode where the effects of over-allocation will be multiplied by the number of consumers.

-server

-server enables the server JVM. This option can be set to true or false.

Example fragment configuring -server

# ...
jvmOptions:
  "-server": true
# ...

Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

-XX

The -XX object can be used for configuring advanced runtime options of a JVM. The -server and -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka.

Example showing the use of the -XX object

jvmOptions:
  "-XX":
    "UseG1GC": true,
    "MaxGCPauseMillis": 20,
    "InitiatingHeapOccupancyPercent": 35,
    "ExplicitGCInvokesConcurrent": true,
    "UseParNewGC": false

The example configuration above will result in the following JVM options:

-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

3.3.10.2. Configuring JVM options

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the jvmOptions property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        jvmOptions:
          "-Xmx": "8g"
          "-Xms": "8g"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.11. Container images

AMQ Streams allows you to configure container images which will be used for its components. Overriding container images is recommended only in special situations, where you need to use a different container registry. For example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from the source. If the configured image is not compatible with AMQ Streams images, it might not work properly.

3.3.11.1. Container image configurations

Container image which should be used for given components can be specified using the image property in:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The image specified in the component-specific custom resource will be used during deployment. If the image field is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used.

  • For Kafka brokers:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka:latest container image.
  • For Kafka broker TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-stunnel:latest container image.
  • For Zookeeper nodes:

    1. Container image specified in the STRIMZI_DEFAULT_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/zookeeper:latest container image.
  • For Zookeeper node TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/zookeeper-stunnel:latest container image.
  • For Topic Operator:

    1. Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
  • For User Operator:

    1. Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/user-operator:latest container image.
  • For Entity Operator TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/entity-operator-stunnel:latest container image.
  • For Kafka Connect:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-connect:latest container image.
  • For Kafka Connect with Source2image support:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-connect-s2i:latest container image.
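
The defaults in the Cluster Operator configuration are set through environment variables on the Cluster Operator deployment. For example, a minimal sketch of overriding the default Kafka broker image for the whole installation might set the STRIMZI_DEFAULT_KAFKA_IMAGE variable in the container specification of the Cluster Operator Deployment (the registry and image name shown here are illustrative only):

# ...
env:
  - name: STRIMZI_DEFAULT_KAFKA_IMAGE
    value: my-registry.example.com/my-org/kafka:latest
# ...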
Warning

Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.

Example of container image configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...

3.3.11.2. Configuring container images

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the image property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        image: my-org/my-image:latest
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.12. Configuring pod scheduling

Important

When two applications are scheduled to the same OpenShift node, both applications might use the same resources, such as disk I/O, and impact each other's performance. That can lead to performance degradation. The best ways to avoid such problems are to schedule Kafka pods so that they do not share nodes with other critical workloads, to use the right nodes, or to dedicate a set of nodes only to Kafka.

3.3.12.1. Scheduling pods based on other applications

3.3.12.1.1. Avoiding sharing nodes with critical applications

Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.

3.3.12.1.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.3.12.1.3. Configuring pod anti-affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: application
                      operator: In
                      values:
                        - postgresql
                        - mongodb
                topologyKey: "kubernetes.io/hostname"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.12.2. Scheduling pods to specific nodes

3.3.12.2.1. Node scheduling

An OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU-heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of AMQ Streams components to use the right nodes.

OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either the built-in node label like beta.kubernetes.io/instance-type or custom labels to select the right node.
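
For example, a fragment selecting nodes by the built-in instance-type label might look like the following (a sketch; the m5.4xlarge value is illustrative and depends on the node labels available in your cluster):

# ...
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: beta.kubernetes.io/instance-type
              operator: In
              values:
                - m5.4xlarge
# ...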

3.3.12.2.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.3.12.2.3. Configuring node affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Label the nodes where AMQ Streams components should be scheduled.

    On OpenShift this can be done using oc label:

    oc label node your-node node-type=fast-network

    Alternatively, some of the existing labels might be reused.

  2. Edit the affinity property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                  - key: node-type
                    operator: In
                    values:
                    - fast-network
        # ...
      zookeeper:
        # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.12.3. Using dedicated nodes

3.3.12.3.1. Dedicated nodes

Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks.

Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability.

To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations.

3.3.12.3.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.3.12.3.3. Tolerations

Tolerations can be configured using the tolerations property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The format of the tolerations property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations documentation.

3.3.12.3.4. Setting up dedicated nodes and scheduling pods on them

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Select the nodes which should be used as dedicated nodes.
  2. Make sure there are no workloads scheduled on these nodes.
  3. Set the taints on the selected nodes.

    On OpenShift this can be done using oc adm taint:

    oc adm taint node your-node dedicated=Kafka:NoSchedule
  4. Add a label to the selected nodes as well.

    On OpenShift this can be done using oc label:

    oc label node your-node dedicated=Kafka
  5. Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        tolerations:
          - key: "dedicated"
            operator: "Equal"
            value: "Kafka"
            effect: "NoSchedule"
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: dedicated
                  operator: In
                  values:
                  - Kafka
        # ...
      zookeeper:
        # ...
  6. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.3.13. List of resources created as part of Kafka Connect cluster with Source2Image support

The following resources will be created by the Cluster Operator in the OpenShift cluster:

connect-cluster-name-connect-source
ImageStream which is used as the base image for the newly-built Docker images.
connect-cluster-name-connect
BuildConfig which is responsible for building the new Kafka Connect Docker images.
connect-cluster-name-connect
ImageStream where the newly built Docker images will be pushed.
connect-cluster-name-connect
DeploymentConfig which is in charge of creating the Kafka Connect worker node pods.
connect-cluster-name-connect-api
Service which exposes the REST interface for managing the Kafka Connect cluster.
connect-cluster-name-config
ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka Connect pods.

3.3.14. Using OpenShift builds and S2I to create new images

OpenShift supports builds, which can be used together with the Source-to-Image (S2I) framework to create new container images. An OpenShift build takes a builder image with S2I support together with source code and binaries provided by the user and uses them to build a new container image. The newly created container image is stored in OpenShift’s local container image repository and can be used in deployments. AMQ Streams provides a Kafka Connect builder image with S2I support, which can be found in the Red Hat Container Catalog as registry.access.redhat.com/amqstreams-1/amqstreams10-kafkaconnects2i-openshift:1.0.0. It takes user-provided binaries (with plugins and connectors) and creates a new Kafka Connect image. This enhanced Kafka Connect image can be used with the Kafka Connect deployment.

The S2I deployment is provided as an OpenShift template. It can be deployed from the template using the command line or the OpenShift console.

Procedure

  1. Create a Kafka Connect S2I cluster from the command-line

    oc apply -f examples/kafka-connect/kafka-connect-s2i.yaml
  2. Once the cluster is deployed, prepare a directory with the Kafka Connect plugins that should be added to the new image:

    $ tree ./my-plugins/
    ./my-plugins/
    ├── debezium-connector-mongodb
    │   ├── bson-3.4.2.jar
    │   ├── CHANGELOG.md
    │   ├── CONTRIBUTE.md
    │   ├── COPYRIGHT.txt
    │   ├── debezium-connector-mongodb-0.7.1.jar
    │   ├── debezium-core-0.7.1.jar
    │   ├── LICENSE.txt
    │   ├── mongodb-driver-3.4.2.jar
    │   ├── mongodb-driver-core-3.4.2.jar
    │   └── README.md
    ├── debezium-connector-mysql
    │   ├── CHANGELOG.md
    │   ├── CONTRIBUTE.md
    │   ├── COPYRIGHT.txt
    │   ├── debezium-connector-mysql-0.7.1.jar
    │   ├── debezium-core-0.7.1.jar
    │   ├── LICENSE.txt
    │   ├── mysql-binlog-connector-java-0.13.0.jar
    │   ├── mysql-connector-java-5.1.40.jar
    │   ├── README.md
    │   └── wkb-1.0.2.jar
    └── debezium-connector-postgres
        ├── CHANGELOG.md
        ├── CONTRIBUTE.md
        ├── COPYRIGHT.txt
        ├── debezium-connector-postgres-0.7.1.jar
        ├── debezium-core-0.7.1.jar
        ├── LICENSE.txt
        ├── postgresql-42.0.0.jar
        ├── protobuf-java-2.6.1.jar
        └── README.md
  3. Start a new image build using the prepared directory:

    oc start-build my-connect-cluster-connect --from-dir ./my-plugins/
    Note

    The name of the build is derived from the name of the deployed Kafka Connect cluster.

  4. Once the build is finished, the new image will be used automatically by the Kafka Connect deployment.

3.4. Kafka Mirror Maker configuration

The full schema of the KafkaMirrorMaker resource is described in the Section B.50, “KafkaMirrorMaker schema reference”. All labels that apply to the desired KafkaMirrorMaker resource will also be applied to the OpenShift resources making up Mirror Maker. This provides a convenient mechanism for those resources to be labelled in whatever way the user requires.

3.4.1. Replicas

It is possible to run multiple Mirror Maker replicas. The number of replicas is defined in the KafkaMirrorMaker resource. You can run multiple Mirror Maker replicas to provide better availability and scalability. However, when running Kafka Mirror Maker on OpenShift, it is not absolutely necessary to run multiple replicas for high availability. If the node where Kafka Mirror Maker is deployed crashes, OpenShift will automatically reschedule the Kafka Mirror Maker pod to a different node. Running Kafka Mirror Maker with multiple replicas can nevertheless provide faster failover times, as the other replicas will already be up and running.

3.4.1.1. Configuring the number of replicas

The number of Kafka Mirror Maker replicas can be configured using the replicas property in KafkaMirrorMaker.spec.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the replicas property in the KafkaMirrorMaker resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaMirrorMaker
    metadata:
      name: my-mirror-maker
    spec:
      # ...
      replicas: 3
      # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f <your-file>

3.4.2. Bootstrap servers

Kafka Mirror Maker always works together with two Kafka clusters (source and target). The source and the target Kafka clusters are specified as two comma-separated lists of <hostname>:<port> pairs. The bootstrap server lists can refer to Kafka clusters which do not need to be deployed in the same OpenShift cluster. They can even refer to a Kafka cluster not deployed by AMQ Streams at all, or to a cluster deployed by AMQ Streams on a different OpenShift cluster and accessible from outside.

If the Kafka clusters are deployed by AMQ Streams in the same OpenShift cluster, each list should ideally contain the Kafka cluster bootstrap service, which is named <cluster-name>-kafka-bootstrap, together with port 9092 for plain traffic or port 9093 for encrypted traffic. If the clusters are deployed by AMQ Streams but on different OpenShift clusters, the list content depends on the way used for exposing the clusters (routes, node ports, or load balancers).

The list of bootstrap servers can be configured in the KafkaMirrorMaker.spec.consumer.bootstrapServers and KafkaMirrorMaker.spec.producer.bootstrapServers properties. The servers should be a comma-separated list containing one or more Kafka brokers, or a Service pointing to Kafka brokers, specified as <hostname>:<port> pairs.

When using Kafka Mirror Maker with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the given cluster.

3.4.2.1. Configuring bootstrap servers

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the KafkaMirrorMaker.spec.consumer.bootstrapServers and KafkaMirrorMaker.spec.producer.bootstrapServers properties. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaMirrorMaker
    metadata:
      name: my-mirror-maker
    spec:
      # ...
      consumer:
        bootstrapServers: my-source-cluster-kafka-bootstrap:9092
      # ...
      producer:
        bootstrapServers: my-target-cluster-kafka-bootstrap:9092
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f <your-file>

3.4.3. Whitelist

You specify the list of topics that Kafka Mirror Maker has to mirror from the source to the target Kafka cluster in the KafkaMirrorMaker resource using the whitelist option. It allows any regular expression, from the simplest case with a single topic name to complex patterns. For example, you can mirror topics A and B using "A|B", or all topics using "*". You can also pass multiple regular expressions separated by commas to Kafka Mirror Maker.
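
For example, a fragment using multiple comma-separated regular expressions might look like the following (a sketch; the orders- and payments- topic name prefixes are purely illustrative):

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  whitelist: "orders-.*,payments-.*"
  # ...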

3.4.3.1. Configuring the topics whitelist

Specify the list of topics that have to be mirrored by Kafka Mirror Maker from the source to the target Kafka cluster using the whitelist property in KafkaMirrorMaker.spec.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the whitelist property in the KafkaMirrorMaker resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaMirrorMaker
    metadata:
      name: my-mirror-maker
    spec:
      # ...
      whitelist: "my-topic|other-topic"
      # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f <your-file>

3.4.4. Consumer group identifier

Kafka Mirror Maker uses a Kafka consumer to consume messages, and it behaves like any other Kafka consumer client. The consumer is responsible for consuming the messages from the source Kafka cluster which will be mirrored to the target Kafka cluster. The consumer needs to be part of a consumer group in order to be assigned partitions.

3.4.4.1. Configuring the consumer group identifier

The consumer group identifier can be configured in the KafkaMirrorMaker.spec.consumer.groupId property.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the KafkaMirrorMaker.spec.consumer.groupId property. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaMirrorMaker
    metadata:
      name: my-mirror-maker
    spec:
      # ...
      consumer:
        groupId: "my-group"
      # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f <your-file>

3.4.5. Number of consumer streams

You can increase the throughput in mirroring topics by increasing the number of consumer threads. More consumer threads will belong to the same configured consumer group. The topic partitions will be assigned across these consumer threads, which will consume messages in parallel.

3.4.5.1. Configuring the number of consumer streams

The number of consumer streams can be configured using the KafkaMirrorMaker.spec.consumer.numStreams property.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the KafkaMirrorMaker.spec.consumer.numStreams property. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaMirrorMaker
    metadata:
      name: my-mirror-maker
    spec:
      # ...
      consumer:
        numStreams: 2
      # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f <your-file>

3.4.6. Connecting to Kafka brokers using TLS

By default, Kafka Mirror Maker tries to connect to Kafka brokers in the source and target clusters using a plain text connection. Additional configuration is required to use TLS.

3.4.6.1. TLS support in Kafka Mirror Maker

TLS support is configured in the tls sub-property of consumer and producer properties in KafkaMirrorMaker.spec. The tls property contains a list of secrets with key names under which the certificates are stored. The certificates should be stored in X.509 format.

An example showing TLS configuration with multiple certificates

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    tls:
      trustedCertificates:
        - secretName: my-source-secret
          certificate: ca.crt
        - secretName: my-other-source-secret
          certificate: certificate.crt
  # ...
  producer:
    tls:
      trustedCertificates:
        - secretName: my-target-secret
          certificate: ca.crt
        - secretName: my-other-target-secret
          certificate: certificate.crt
  # ...

When multiple certificates are stored in the same secret, the secret can be listed multiple times.

An example showing TLS configuration with multiple certificates from the same secret

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    tls:
      trustedCertificates:
        - secretName: my-source-secret
          certificate: ca.crt
        - secretName: my-source-secret
          certificate: ca2.crt
  # ...
  producer:
    tls:
      trustedCertificates:
        - secretName: my-target-secret
          certificate: ca.crt
        - secretName: my-target-secret
          certificate: ca2.crt
  # ...

3.4.6.2. Configuring TLS encryption in Kafka Mirror Maker

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

As Kafka Mirror Maker connects to two Kafka clusters (source and target), you can choose to configure TLS for one or both of the clusters. The following steps describe how to configure TLS on the consumer side for connecting to the source Kafka cluster:

  1. Find out the name of the secret with the certificate which should be used for TLS Server Authentication and the key under which the certificate is stored in the secret. If such a secret does not exist yet, prepare the certificate in a file and create the secret.

    On OpenShift this can be done using oc create:

    oc create secret generic <my-secret> --from-file=<my-file.crt>
  2. Edit the KafkaMirrorMaker.spec.consumer.tls property. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaMirrorMaker
    metadata:
      name: my-mirror-maker
    spec:
      # ...
      consumer:
        tls:
          trustedCertificates:
            - secretName: my-cluster-cluster-cert
              certificate: ca.crt
      # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f <your-file>

Repeat the above steps for configuring TLS on the target Kafka cluster. In this case, the secret containing the certificate has to be configured in the KafkaMirrorMaker.spec.producer.tls property.

3.4.7. Connecting to Kafka brokers with Authentication

By default, Kafka Mirror Maker will try to connect to Kafka brokers without any authentication. Authentication can be enabled in the KafkaMirrorMaker resource.

3.4.7.1. Authentication support in Kafka Mirror Maker

Authentication can be configured in the KafkaMirrorMaker.spec.consumer.authentication and KafkaMirrorMaker.spec.producer.authentication properties. The authentication property specifies the type of the authentication method which should be used and additional configuration details depending on the mechanism. The currently supported authentication types are:

  • TLS client authentication
  • SASL based authentication using SCRAM-SHA-512 mechanism
3.4.7.1.1. TLS Client Authentication

To use TLS client authentication, set the type property to the value tls. TLS client authentication uses a TLS certificate to authenticate. The certificate has to be specified in the certificateAndKey property. It is always loaded from an OpenShift secret. Inside the secret, the certificate and key have to be stored in X.509 format under separate keys.

Note

TLS client authentication can be used only with TLS connections. For more details about TLS configuration in Kafka Mirror Maker see Section 3.4.6, “Connecting to Kafka brokers using TLS”.

An example showing TLS client authentication configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    authentication:
      type: tls
      certificateAndKey:
        secretName: my-source-secret
        certificate: public.crt
        key: private.key
  # ...
  producer:
    authentication:
      type: tls
      certificateAndKey:
        secretName: my-target-secret
        certificate: public.crt
        key: private.key
  # ...

3.4.7.1.2. SCRAM-SHA-512 authentication

To use authentication based on the SCRAM-SHA-512 SASL mechanism, set the type property to the value scram-sha-512. It can be used only if the broker listener that the clients connect to is configured to use it. SCRAM-SHA-512 uses a username and password to authenticate. Specify the username in the username property. Specify the password as a reference to a Secret containing the password in the passwordSecret property. The passwordSecret property has to specify the name of the Secret containing the password and the key under which the password is stored inside the Secret.

An example showing SCRAM-SHA-512 client authentication configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    authentication:
      type: scram-sha-512
      username: my-source-user
      passwordSecret:
        secretName: my-source-user
        password: password
  # ...
  producer:
    authentication:
      type: scram-sha-512
      username: my-producer-user
      passwordSecret:
        secretName: my-producer-user
        password: password
  # ...

3.4.7.2. Configuring TLS client authentication in Kafka Mirror Maker

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator with a tls listener with tls authentication enabled

Procedure

As Kafka Mirror Maker connects to two Kafka clusters (source and target), you can choose to configure TLS client authentication for one or both of the clusters. The following steps describe how to configure TLS client authentication on the consumer side for connecting to the source Kafka cluster:

  1. Find out the name of the Secret with the public and private keys which should be used for TLS Client Authentication and the keys under which they are stored in the Secret. If such a Secret does not exist yet, prepare the keys in a file and create the Secret.

    On OpenShift this can be done using oc create:

    oc create secret generic <my-secret> --from-file=<my-public.crt> --from-file=<my-private.key>
  2. Edit the KafkaMirrorMaker.spec.consumer.authentication property. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaMirrorMaker
    metadata:
      name: my-mirror-maker
    spec:
      # ...
      consumer:
        authentication:
          type: tls
          certificateAndKey:
            secretName: my-secret
            certificate: my-public.crt
            key: my-private.key
      # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f <your-file>

Repeat the above steps for configuring TLS client authentication on the target Kafka cluster. In this case, the secret containing the certificate has to be configured in the KafkaMirrorMaker.spec.producer.authentication property.

3.4.7.3. Configuring SCRAM-SHA-512 authentication in Kafka Mirror Maker

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator with a listener configured for SCRAM-SHA-512 authentication
  • Username to be used for authentication

Procedure

As Kafka Mirror Maker connects to two Kafka clusters (source and target), you can choose to configure SCRAM-SHA-512 authentication for one or both of the clusters. The following steps describe how to configure SCRAM-SHA-512 authentication on the consumer side for connecting to the source Kafka cluster:

  1. Find out the name of the Secret with the password which should be used for authentication and the key under which the password is stored in the Secret. If such a Secret does not exist yet, prepare a file with the password and create the Secret.

    On OpenShift this can be done using oc create:

    echo -n '1f2d1e2e67df' > <my-password.txt>
    oc create secret generic <my-secret> --from-file=<my-password.txt>
  2. Edit the KafkaMirrorMaker.spec.consumer.authentication property. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaMirrorMaker
    metadata:
      name: my-mirror-maker
    spec:
      # ...
      consumer:
        authentication:
          type: scram-sha-512
          username: <my-username>
          passwordSecret:
            secretName: <my-secret>
            password: <my-password.txt>
      # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f <your-file>

Repeat the above steps for configuring SCRAM-SHA-512 authentication on the target Kafka cluster. In this case, the secret containing the certificate has to be configured in the KafkaMirrorMaker.spec.producer.authentication property.

3.4.8. Kafka Mirror Maker configuration

AMQ Streams allows you to customize the configuration of Kafka Mirror Maker by editing most of the options for the related consumer and producer. Producer options are listed in the Apache Kafka documentation. Consumer options are listed in the Apache Kafka documentation.

The only options which cannot be configured are those related to the following areas:

  • Kafka cluster bootstrap address
  • Security (Encryption, Authentication, and Authorization)
  • Consumer group identifier

These options are automatically configured by AMQ Streams.

3.4.8.1. Kafka Mirror Maker configuration

Kafka Mirror Maker can be configured using the config sub-property in KafkaMirrorMaker.spec.consumer and KafkaMirrorMaker.spec.producer. This property should contain the Kafka Mirror Maker consumer and producer configuration options as keys. The values could be in one of the following JSON types:

  • String
  • Number
  • Boolean

Users can specify and configure the options listed in the Apache Kafka producer and consumer documentation, with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • ssl.
  • sasl.
  • security.
  • bootstrap.servers
  • group.id

When one of the forbidden options is present in the config property, it will be ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka Mirror Maker.

Important

The Cluster Operator does not validate keys or values in the provided config object. When an invalid configuration is provided, the Kafka Mirror Maker might not start or might become unstable. In such cases, the configuration in the KafkaMirrorMaker.spec.consumer.config or KafkaMirrorMaker.spec.producer.config object should be fixed and the Cluster Operator will roll out the new configuration for Kafka Mirror Maker.

An example showing Kafka Mirror Maker configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    config:
      max.poll.records: 100
      receive.buffer.bytes: 32768
  producer:
    config:
      compression.type: gzip
      batch.size: 8192
  # ...

3.4.8.2. Configuring Kafka Mirror Maker

Prerequisites

  • Two running Kafka clusters (source and target)
  • A running Cluster Operator

Procedure

  1. Edit the KafkaMirrorMaker.spec.consumer.config and KafkaMirrorMaker.spec.producer.config properties. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaMirrorMaker
    metadata:
      name: my-mirror-maker
    spec:
      # ...
      consumer:
        config:
          max.poll.records: 100
          receive.buffer.bytes: 32768
      producer:
        config:
          compression.type: gzip
          batch.size: 8192
      # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f <your-file>

3.4.9. CPU and memory resources

For every deployed container, AMQ Streams allows you to specify the resources which should be reserved for it and the maximum resources that can be consumed by it. AMQ Streams supports two types of resources:

  • Memory
  • CPU

AMQ Streams uses the OpenShift syntax for specifying CPU and memory resources.

3.4.9.1. Resource limits and requests

Resource limits and requests can be configured using the resources property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
3.4.9.1.1. Resource requests

Requests specify the resources that will be reserved for a given container. Reserving the resources will ensure that they are always available.

Important

If the resource request is for more than the available free resources in the OpenShift cluster, the pod will not be scheduled.

Resource requests can be specified in the requests property. The resource requests currently supported by AMQ Streams are memory and CPU. Memory is specified under the property memory. CPU is specified under the property cpu.

An example showing resource request configuration

# ...
resources:
  requests:
    cpu: 12
    memory: 64Gi
# ...

It is also possible to specify a resource request just for one of the resources:

An example showing resource request configuration with memory request only

# ...
resources:
  requests:
    memory: 64Gi
# ...

Or:

An example showing resource request configuration with CPU request only

# ...
resources:
  requests:
    cpu: 12
# ...

3.4.9.1.2. Resource limits

Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests.

Resource limits can be specified in the limits property. The resource limits currently supported by AMQ Streams are memory and CPU. Memory is specified under the property memory. CPU is specified under the property cpu.

An example showing resource limits configuration

# ...
resources:
  limits:
    cpu: 12
    memory: 64Gi
# ...

It is also possible to specify the resource limit just for one of the resources:

An example showing resource limit configuration with memory limit only

# ...
resources:
  limits:
    memory: 64Gi
# ...

Or:

An example showing resource limits configuration with CPU limit only

# ...
resources:
  limits:
    cpu: 12
# ...

3.4.9.1.3. Supported CPU formats

CPU requests and limits are supported in the following formats:

  • Number of CPU cores as an integer (5 CPU cores) or decimal (2.5 CPU cores).
  • Number of millicpus / millicores (100m), where 1000 millicores is the same as 1 CPU core.

An example of using different CPU units

# ...
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 2.5
# ...

Note

The amount of computing power of 1 CPU core might differ depending on the platform where OpenShift is deployed.

For more details about the CPU specification, see the Meaning of CPU website.

3.4.9.1.4. Supported memory formats

Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.

  • To specify memory in megabytes, use the M suffix. For example 1000M.
  • To specify memory in gigabytes, use the G suffix. For example 1G.
  • To specify memory in mebibytes, use the Mi suffix. For example 1000Mi.
  • To specify memory in gibibytes, use the Gi suffix. For example 1Gi.

An example of using different memory units

# ...
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# ...

For more details about the memory specification and additional supported units, see the Meaning of memory website.

3.4.9.1.5. Additional resources

3.4.9.2. Configuring resource requests and limits

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the resources property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        resources:
          requests:
            cpu: "8"
            memory: 64Gi
          limits:
            cpu: "12"
            memory: 128Gi
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

Additional resources

3.4.10. Logging

Logging enables you to diagnose errors and performance issues with AMQ Streams. Various logger implementations are used: Kafka and Zookeeper use the log4j logger, while the Topic Operator, User Operator, and other components use the log4j2 logger.

This section provides information about different loggers and describes how to configure log levels.

You can set the log levels by specifying the loggers and their levels directly (inline) or by using a custom (external) ConfigMap.

3.4.10.1. Using inline logging setting

Procedure

  1. Edit the YAML file to specify the loggers and their level for the required components. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        logging:
          type: inline
          loggers:
           logger.name: "INFO"
        # ...

    In the above example, the log level is set to INFO. You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL, or OFF. For more information about the log levels, see the log4j manual.

  2. Create or update the Kafka resource in OpenShift.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.4.10.2. Using external ConfigMap for logging setting

Procedure

  1. Edit the YAML file to specify the name of the ConfigMap which should be used for the required components. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        logging:
          type: external
          name: customConfigMap
        # ...

    Remember to place your custom logging configuration in the ConfigMap under the log4j.properties key (or the log4j2.properties key for components that use log4j2). An example ConfigMap is shown after this procedure.

  2. Create or update the Kafka resource in OpenShift.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file
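
A minimal sketch of such a ConfigMap for the Kafka brokers, matching the customConfigMap name used above (the appender and logger settings are illustrative only; supply your complete logging configuration under the log4j.properties key, or under the log4j2.properties key for components that use log4j2):

apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  log4j.properties: |
    # Illustrative log4j configuration; adjust appenders and levels as needed
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n
    log4j.rootLogger=INFO, CONSOLE
    log4j.logger.kafka=INFO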

3.4.10.3. Loggers

AMQ Streams consists of several components. Each component has its own loggers and is configurable. This section provides information about loggers of various components.

Components and their loggers are listed below.

  • Kafka

    • kafka.root.logger.level
    • log4j.logger.org.I0Itec.zkclient.ZkClient
    • log4j.logger.org.apache.zookeeper
    • log4j.logger.kafka
    • log4j.logger.org.apache.kafka
    • log4j.logger.kafka.request.logger
    • log4j.logger.kafka.network.Processor
    • log4j.logger.kafka.server.KafkaApis
    • log4j.logger.kafka.network.RequestChannel$
    • log4j.logger.kafka.controller
    • log4j.logger.kafka.log.LogCleaner
    • log4j.logger.state.change.logger
    • log4j.logger.kafka.authorizer.logger
  • Zookeeper

    • zookeeper.root.logger
  • Kafka Connect and Kafka Connect with Source2Image support

    • connect.root.logger.level
    • log4j.logger.org.apache.zookeeper
    • log4j.logger.org.I0Itec.zkclient
    • log4j.logger.org.reflections
  • Kafka Mirror Maker

    • mirrormaker.root.logger
  • Topic Operator

    • rootLogger.level
  • User Operator

    • rootLogger.level

3.4.11. Prometheus metrics

AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and Zookeeper to Prometheus metrics. When metrics are enabled, they are exposed on port 9404.

3.4.11.1. Metrics configuration

Prometheus metrics can be enabled by configuring the metrics property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

When the metrics property is not defined in the resource, the Prometheus metrics will be disabled. To enable Prometheus metrics export without any further configuration, you can set it to an empty object ({}).

Example of enabling metrics without any further configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics: {}
    # ...
  zookeeper:
    # ...

The metrics property might contain additional configuration for the Prometheus JMX exporter.

Example of enabling metrics with additional Prometheus JMX Exporter configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics:
      lowercaseOutputName: true
      rules:
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count"
          name: "kafka_server_$1_$2_total"
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count"
          name: "kafka_server_$1_$2_total"
          labels:
            topic: "$3"
    # ...
  zookeeper:
    # ...

3.4.11.2. Configuring Prometheus metrics

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the metrics property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
        metrics:
          lowercaseOutputName: true
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.4.12. JVM Options

Apache Kafka and Apache Zookeeper run inside a Java Virtual Machine (JVM). The JVM has many configuration options to optimize the performance for different platforms and architectures. AMQ Streams allows configuring some of these options.

3.4.12.1. JVM configuration

JVM options can be configured using the jvmOptions property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

Only a selected subset of available JVM options can be configured. The following options are supported:

-Xms and -Xmx

-Xms configures the minimum initial allocation heap size when the JVM starts. -Xmx configures the maximum heap size.

Note

The units accepted by JVM settings such as -Xmx and -Xms are those accepted by the JDK java binary in the corresponding image. Accordingly, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is in contrast to the units used for memory requests and limits, which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes.

The default values used for -Xms and -Xmx depend on whether there is a memory limit configured for the container:

  • If there is a memory limit then the JVM’s minimum and maximum memory will be set to a value corresponding to the limit.
  • If there is no memory limit then the JVM’s minimum memory will be set to 128M and the JVM’s maximum memory will not be defined. This allows for the JVM’s memory to grow as-needed, which is ideal for single node environments in test and development.
Important

Setting -Xmx explicitly requires some care:

  • The JVM’s overall memory usage will be approximately 4 × the maximum heap, as configured by -Xmx.
  • If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it).
  • If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if -Xms is set to -Xmx, or some later time if not).

When setting -Xmx explicitly, it is recommended to:

  • set the memory request and the memory limit to the same value,
  • use a memory request that is at least 4.5 × the -Xmx,
  • consider setting -Xms to the same value as -Xmx.
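
For example, a fragment following these recommendations might combine the heap settings with a matching memory request and limit as follows (a minimal sketch; the 2g heap and 10Gi memory values are illustrative only):

# ...
resources:
  requests:
    memory: 10Gi
  limits:
    memory: 10Gi
jvmOptions:
  "-Xms": "2g"
  "-Xmx": "2g"
# ...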
Important

Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.

Example fragment configuring -Xmx and -Xms

# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...

In the above example, the JVM will use 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage will be approximately 8GiB.

Setting the same value for the initial (-Xms) and maximum (-Xmx) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and Zookeeper pods, such allocation could cause unwanted latency. For Kafka Connect, avoiding over-allocation may be the most important concern, especially in distributed mode, where the effects of over-allocation are multiplied by the number of consumers.

-server

-server enables the server JVM. This option can be set to true or false.

Example fragment configuring -server

# ...
jvmOptions:
  "-server": true
# ...

Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

-XX

The -XX object can be used for configuring advanced runtime options of a JVM. The -server and -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka.

Example showing the use of the -XX object

jvmOptions:
  "-XX":
    "UseG1GC": true,
    "MaxGCPauseMillis": 20,
    "InitiatingHeapOccupancyPercent": 35,
    "ExplicitGCInvokesConcurrent": true,
    "UseParNewGC": false

The example configuration above will result in the following JVM options:

-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

3.4.12.2. Configuring JVM options

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the jvmOptions property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        jvmOptions:
          "-Xmx": "8g"
          "-Xms": "8g"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.4.13. Container images

AMQ Streams allows you to configure the container images which will be used for its components. Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.

3.4.13.1. Container image configurations

The container image which should be used for a given component can be specified using the image property in:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The image specified in the component-specific custom resource will be used during deployment. If the image field is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used.

  • For Kafka brokers:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka:latest container image.
  • For Kafka broker TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-stunnel:latest container image.
  • For Zookeeper nodes:

    1. Container image specified in the STRIMZI_DEFAULT_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/zookeeper:latest container image.
  • For Zookeeper node TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/zookeeper-stunnel:latest container image.
  • For Topic Operator:

    1. Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
  • For User Operator:

    1. Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/user-operator:latest container image.
  • For Entity Operator TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/entity-operator-stunnel:latest container image.
  • For Kafka Connect:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-connect:latest container image.
  • For Kafka Connect with Source2image support:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE environment variable from the Cluster Operator configuration.
    2. strimzi/kafka-connect-s2i:latest container image.
Warning

Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.

Example of container image configuration

apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...

3.4.13.2. Configuring container images

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the image property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        image: my-org/my-image:latest
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.4.14. Configuring pod scheduling

Important

When two applications are scheduled to the same OpenShift node, both applications might use the same resources, such as disk I/O, and impact each other's performance. That can lead to performance degradation. The best ways to avoid such problems are to schedule Kafka pods so that they do not share nodes with other critical workloads, to use the right nodes, or to dedicate a set of nodes only to Kafka.

3.4.14.1. Scheduling pods based on other applications

3.4.14.1.1. Avoiding sharing nodes with critical applications

Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.

3.4.14.1.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.4.14.1.3. Configuring pod anti-affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: application
                      operator: In
                      values:
                        - postgresql
                        - mongodb
                topologyKey: "kubernetes.io/hostname"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.4.14.2. Scheduling pods to specific nodes

3.4.14.2.1. Node scheduling

An OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU-heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of AMQ Streams components to use the right nodes.

OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either the built-in node label like beta.kubernetes.io/instance-type or custom labels to select the right node.

3.4.14.2.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.4.14.2.3. Configuring node affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Label the nodes where AMQ Streams components should be scheduled.

    On OpenShift this can be done using oc label:

    oc label node your-node node-type=fast-network

    Alternatively, some of the existing node labels can be reused. See the example after this procedure for how to inspect the labels that are already set on a node.

  2. Edit the affinity property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                  - key: node-type
                    operator: In
                    values:
                    - fast-network
        # ...
      zookeeper:
        # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file
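
Before creating new labels, it can be useful to review the labels that a node already carries:

oc get node your-node --show-labels

Note that because the example uses requiredDuringSchedulingIgnoredDuringExecution, pods will stay in the Pending state if no node carries a matching label. The scheduling events shown by oc describe pod your-pod can help to diagnose this.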

3.4.14.3. Using dedicated nodes

3.4.14.3.1. Dedicated nodes

Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks.

Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability.

To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations.

3.4.14.3.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.4.14.3.3. Tolerations

Tolerations can be configured using the tolerations property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

The format of the tolerations property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations documentation.
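
As a minimal sketch, the same toleration used in the procedure below can also be added to a Kafka Connect cluster so that its pods are allowed on the dedicated nodes. The resource name my-connect is only an example:

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "Kafka"
      effect: "NoSchedule"
  # ...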

3.4.14.3.4. Setting up dedicated nodes and scheduling pods on them

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Select the nodes which should be used as dedicated nodes.
  2. Make sure there are no workloads scheduled on these nodes.
  3. Set the taints on the selected nodes.

    On OpenShift this can be done using oc adm taint:

    oc adm taint node your-node dedicated=Kafka:NoSchedule
  4. Additionally, add a label to the selected nodes.

    On OpenShift this can be done using oc label:

    oc label node your-node dedicated=Kafka
  5. Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: Kafka
    spec:
      kafka:
        # ...
        tolerations:
          - key: "dedicated"
            operator: "Equal"
            value: "Kafka"
            effect: "NoSchedule"
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: dedicated
                  operator: In
                  values:
                  - Kafka
        # ...
      zookeeper:
        # ...
  6. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file
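
A quick way to confirm that the taint and the label were applied as expected is to inspect the node description, which lists both in its Taints and Labels sections:

oc describe node your-node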

3.4.15. List of resources created as part of Kafka Mirror Maker

The following resources will be created by the Cluster Operator in the OpenShift cluster:

<mirror-maker-name>-mirror-maker
Deployment which is in charge of creating the Kafka Mirror Maker pods.
<mirror-maker-name>-config
ConfigMap which contains the Kafka Mirror Maker ancillary configuration and is mounted as a volume by the Kafka Mirror Maker pods.
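
Assuming a Kafka Mirror Maker resource named my-mirror-maker, the created resources can be checked with, for example:

oc get deployment my-mirror-maker-mirror-maker
oc get configmap my-mirror-maker-config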