Chapter 2. Enhancements

The enhancements added in this release are outlined below.

2.1. Kafka 2.7.0 enhancements

For an overview of the enhancements introduced with Kafka 2.7.0, refer to the Kafka 2.7.0 Release Notes.

2.2. Configuring the Deployment strategy

You can now configure the Deployment strategy for Kafka Connect, MirrorMaker, and the Kafka Bridge.

The RollingUpdate strategy is used by default for all resources. During a rolling update, the old and new pods in the Deployment are run in parallel. This is the optimal strategy for most use cases.

To reduce resource consumption, you can choose the Recreate strategy. With this strategy, during an update, the old pods in the Deployment are terminated before any new pods are created.

You set the Deployment strategy in spec.template.deployment in the KafkaConnect, KafkaMirrorMaker, KafkaMirrorMaker2, and KafkaBridge resources.

Example of Recreate Deployment strategy for Kafka Connect

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  template:
    deployment:
      deploymentStrategy: Recreate
  #...

If spec.template.deployment is not configured, the RollingUpdate strategy is used.

See DeploymentTemplate schema reference

2.3. Disabling Owner Reference in CA Secrets

Cluster and Client CA Secrets are created with an ownerReference field, which is set to the Kafka custom resource.

Now, you can disable the CA Secrets ownerReference by adding the generateSecretOwnerReference: false property to your Kafka cluster configuration. If the ownerReference for a CA Secret is disabled, the Secret is not deleted by OpenShift when the corresponding Kafka custom resource is deleted. The CA Secret is then available for reuse with a new Kafka cluster.

Example configuration to disable ownerReference in Cluster and Client CA Secrets

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
# ...
spec:
# ...
  clusterCa:
    generateCertificateAuthority: true
    generateSecretOwnerReference: false
  clientsCa:
    generateCertificateAuthority: true
    generateSecretOwnerReference: false
# ...

See Disabling ownerReference in the CA Secrets

2.4. Prefix for Kafka user Secret name

You can now use the secretPrefix property to configure the User Operator, which adds a prefix to all secret names created for a KafkaUser resource.

For example, this configuration:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    userOperator:
      secretPrefix: kafka-
    # ...

This configuration creates a secret named kafka-my-user for a user named my-user.

See EntityUserOperatorSpec schema reference

2.5. Rolling individual Kafka and ZooKeeper pods through the Cluster Operator

Using an annotation, you can manually trigger a rolling update of an existing pod that is part of the Kafka cluster or ZooKeeper cluster StatefulSets. When multiple pods from the same StatefulSet are annotated at the same time, consecutive rolling updates are performed within the same reconciliation run.
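For example, you can apply the annotation to a broker pod with oc. A sketch, assuming a Kafka cluster named my-cluster (a hypothetical name):

```shell
# Trigger a rolling update of broker pod 0 by applying
# the manual rolling update annotation:
oc annotate pod my-cluster-kafka-0 strimzi.io/manual-rolling-update=true

# The same annotation works for a ZooKeeper pod:
oc annotate pod my-cluster-zookeeper-0 strimzi.io/manual-rolling-update=true
```

The annotated pods are restarted by the Cluster Operator in the next reconciliation run.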

See Performing a rolling update using a Pod annotation

2.6. Topic Operator topic store

AMQ Streams no longer uses ZooKeeper to store topic metadata. Topic metadata is now stored in the Kafka cluster itself, under the control of the Topic Operator.

This change is required to prepare AMQ Streams for the future removal of ZooKeeper as a Kafka dependency.

The Topic Operator now uses persistent storage to store topic metadata describing topic configuration as key-value pairs. Topic metadata is accessed locally in-memory. Updates from operations applied to the local in-memory topic store are persisted to a backup topic store on disk. The topic store is continually synchronized with updates from Kafka topics.

When upgrading to AMQ Streams 1.7, the transition to Topic Operator control of the topic store is seamless. Metadata is found and migrated from ZooKeeper, and the old store is cleansed.

New internal topics

To support the handling of topic metadata in the topic store, two new internal topics are created in your Kafka cluster when you upgrade to AMQ Streams 1.7:

  • __strimzi_store_topic: Input topic for storing the topic metadata.
  • __strimzi-topic-operator-kstreams-topic-store-changelog: Retains a log of compacted topic store values.

Warning

Do not delete these topics, as they are essential to the running of the Topic Operator.
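To check that the internal topics were created after the upgrade, you can list topics from inside a broker pod. A sketch, assuming a Kafka cluster named my-cluster (a hypothetical name):

```shell
# List the Strimzi internal topics from inside a broker pod:
oc exec -it my-cluster-kafka-0 -- /opt/kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 --list | grep __strimzi
```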

See Topic Operator topic store

2.7. JAAS configuration

The JAAS configuration string in the sasl.jaas.config property has been added to the generated secrets for a KafkaUser with SCRAM-SHA-512 authentication.
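The value is a standard Kafka JAAS configuration string for the SCRAM login module. A sketch of what the decoded sasl.jaas.config entry looks like for a user named my-user (the password shown is a placeholder; the real value is generated):

```properties
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="my-user" password="GENERATED-PASSWORD";
```

A client can copy this value directly into its producer or consumer configuration when sasl.mechanism=SCRAM-SHA-512 is used.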

See SCRAM-SHA-512 Authentication

2.8. Cluster identification for Kafka status

The KafkaStatus schema is updated to include the clusterId to identify a Kafka cluster. The status property of the Kafka resource provides status information on a Kafka cluster.

Kafka status property

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
  # ...
status:
  conditions:
  - lastTransitionTime: "YEAR-MONTH-20T11:37:00.706Z"
    status: "True"
    type: Ready
  observedGeneration: 1
  clusterId: CLUSTER-ID
  # ...

When you retrieve the status of a Kafka resource, the ID of the Kafka cluster is also returned:

oc get kafka MY-KAFKA-CLUSTER -o jsonpath='{.status}'

You can also retrieve only the cluster ID for the Kafka resource:

oc get kafka MY-KAFKA-CLUSTER -o jsonpath='{.status.clusterId}'

See KafkaStatus schema reference and Finding the status of a custom resource

2.9. Kafka Connect status

When you retrieve the status of a KafkaConnector resource, the list of topics used by the connector is now returned in the topics property.

See KafkaConnectorStatus schema reference and Finding the status of a custom resource

2.10. Running AMQ Streams with read-only root file system

You can now run AMQ Streams with a read-only root file system. An additional volume has been added so that temporary files are written to a mounted /tmp directory. Previously, the /tmp directory was used directly from the container file system.

In this way, the container file system does not need to be modified, and AMQ Streams can run unimpeded from a read-only root file system.

2.11. Example YAML files specify inter-broker protocol version

The example Kafka configuration files provided with AMQ Streams now specify the inter.broker.protocol.version. The inter.broker.protocol.version and log.message.format.version properties in the Kafka config are set to the versions supported by the specified Kafka version (spec.kafka.version). The properties represent the message format version appended to messages and the version of the inter-broker protocol used in a Kafka cluster. Updates to these properties are required when upgrading your Kafka version.

Specified Kafka versions

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.7.0
    #...
    config:
      #...
      log.message.format.version: "2.7"
      inter.broker.protocol.version: "2.7"

See Upgrading Kafka

2.12. Restricting Cluster Operator access with network policy

The Cluster Operator can run in the same namespace as the resources it manages, or in a separate namespace. Two new environment variables now control which namespaces can access the Cluster Operator.

By default, the STRIMZI_OPERATOR_NAMESPACE environment variable is configured to use the Kubernetes Downward API to find the namespace the Cluster Operator is running in. If the Cluster Operator is running in the same namespace as the resources, only local access is required, and is allowed by AMQ Streams.

If the Cluster Operator is running in a separate namespace from the resources it manages, any namespace in the Kubernetes cluster is allowed access to the Cluster Operator unless network policy is configured. Use the optional STRIMZI_OPERATOR_NAMESPACE_LABELS environment variable to establish network policy for the Cluster Operator using namespace labels. With namespace labels added, access to the Cluster Operator is restricted to the namespaces specified.

Network policy configured for the Cluster Operator deployment

#...
env:
  - name: STRIMZI_OPERATOR_NAMESPACE_LABELS
    value: label1=value1,label2=value2
  #...

See Cluster Operator configuration

2.13. Adding labels and annotations to Secrets

By configuring the clusterCaCert template property in the Kafka custom resource, you can add custom labels and annotations to the Cluster CA Secrets created by the Cluster Operator. Labels and annotations are useful for identifying objects and adding contextual information. You configure template properties in Strimzi custom resources.

Example template customization to add labels and annotations to Secrets

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      clusterCaCert:
        metadata:
          labels:
            label1: value1
            label2: value2
          annotations:
            annotation1: value1
            annotation2: value2
    # ...

See Customizing OpenShift resources

2.14. Pausing reconciliation of custom resources

You can pause the reconciliation of a custom resource by setting the strimzi.io/pause-reconciliation annotation to true in its configuration. For example, you can apply the annotation to the KafkaConnect resource so that reconciliation by the Cluster Operator is paused.
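Instead of editing the resource, you can apply the annotation with oc. A sketch, assuming a KafkaConnect resource named my-connect-cluster (a hypothetical name):

```shell
# Pause reconciliation of the KafkaConnect resource:
oc annotate kafkaconnect my-connect-cluster strimzi.io/pause-reconciliation="true"

# To resume reconciliation, remove the annotation
# (the trailing dash removes an annotation):
oc annotate kafkaconnect my-connect-cluster strimzi.io/pause-reconciliation-
```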

Example custom resource with a paused reconciliation condition type

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  annotations:
    strimzi.io/pause-reconciliation: "true"
    strimzi.io/use-connector-resources: "true"
  creationTimestamp: 2021-03-12T10:47:11Z
  #...
spec:
  # ...
status:
  conditions:
  - lastTransitionTime: 2021-03-12T10:47:41.689249Z
    status: "True"
    type: ReconciliationPaused

Important

It is not currently possible to pause reconciliation of KafkaTopic resources.

See Pausing reconciliation of custom resources

2.15. Restarting connectors and tasks

Connector instances and their tasks can now be restarted by using Kubernetes annotations on the relevant custom resources.

You can restart connectors for both Kafka Connect and MirrorMaker 2.0, which uses the Kafka Connect framework to replicate data between the source and target Kafka clusters.

  • To restart a Kafka Connect connector, you annotate the corresponding KafkaConnector custom resource.
  • To restart a MirrorMaker 2.0 connector, you annotate the corresponding KafkaMirrorMaker2 custom resource.

The annotations can also be used to restart a specified task for a connector.
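For example, for a Kafka Connect connector managed through a KafkaConnector resource named my-connector (a hypothetical name), the annotations can be applied with oc:

```shell
# Restart the connector:
oc annotate kafkaconnector my-connector strimzi.io/restart=true

# Restart only task 0 of the connector:
oc annotate kafkaconnector my-connector strimzi.io/restart-task=0
```

The Cluster Operator clears the annotation once the restart has been triggered.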

For Kafka Connect, see Performing a restart of a Kafka connector and Performing a restart of a Kafka connector task.

For MirrorMaker 2.0, see Performing a restart of a Kafka MirrorMaker 2.0 connector and Performing a restart of a Kafka MirrorMaker 2.0 connector task.

2.16. OAuth 2.0 authentication and authorization

This release includes the following enhancements to OAuth 2.0 token-based authentication and authorization in AMQ Streams.

Checks on JWT access tokens

You can now configure two additional checks on JWT access tokens. Both of these checks are configured in the OAuth 2.0 configuration for Kafka broker listeners.

Custom claim checks

Custom claim checks impose custom rules on the validation of JWT access tokens by Kafka brokers. They are defined using JsonPath filter queries.

If an access token does not contain the necessary data, it is rejected. When using introspection endpoint token validation, the custom check is applied to the introspection endpoint response JSON.

To configure custom claim checks, add the customClaimCheck option and define a JsonPath filter query. Custom claim checks are disabled by default.
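For example, the check below only accepts tokens that contain a specific custom claim value. A sketch of the listener authentication configuration; the claim name and value are hypothetical:

```yaml
authentication:
  type: oauth
  # ...
  # Reject any token whose "custom" claim does not equal "custom-value"
  # (hypothetical claim and value):
  customClaimCheck: "@.custom == 'custom-value'"
```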

See Configuring OAuth 2.0 support for Kafka brokers

Audience checks

Your authorization server might provide aud (audience) claims in JWT access tokens.

When audience checks are enabled, the Kafka broker rejects tokens that do not contain the broker’s clientId in their aud claims.

To enable audience checks, set the checkAudience option to true. Audience checks are disabled by default.
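A sketch of enabling the audience check on an oauth listener; the clientId shown is a hypothetical broker client ID registered with the authorization server:

```yaml
authentication:
  type: oauth
  # ...
  clientId: kafka-broker  # the broker's client ID in the authorization server
  checkAudience: true     # reject tokens whose aud claims lack this client ID
```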

See Configuring OAuth 2.0 support for Kafka brokers

Support for OAuth 2.0 over SASL PLAIN authentication

You can now configure the PLAIN mechanism for OAuth 2.0 authentication between Kafka clients and Kafka brokers. Previously, the only supported authentication mechanism was OAUTHBEARER.

PLAIN is a simple authentication mechanism supported by all Kafka client tools (including developer tools such as kafkacat). AMQ Streams includes server-side callbacks that enable PLAIN to be used with OAuth 2.0 authentication. These capabilities are referred to as OAuth 2.0 over PLAIN.

Note

Red Hat recommends using OAUTHBEARER authentication for clients whenever possible. OAUTHBEARER provides a higher level of security than PLAIN because client credentials are never shared with Kafka brokers. Consider using PLAIN only with Kafka clients that do not support OAUTHBEARER.

When used with the provided OAuth 2.0 over PLAIN callbacks, Kafka clients can authenticate with Kafka brokers using either of the following methods:

  • Client ID and secret (by using the OAuth 2.0 client credentials mechanism)
  • A long-lived access token, obtained manually at configuration time

To use PLAIN, you must enable it in the oauth listener configuration for the Kafka broker. Three new configuration options are now supported:

  • enableOauthBearer
  • enablePlain
  • tokenEndpointUri

Example oauth listener configuration

  # ...
  name: external
  port: 9094
  type: loadbalancer
  tls: true
  authentication:
    type: oauth
    # ...
    checkIssuer: false
    fallbackUserNameClaim: client_id
    fallbackUserNamePrefix: client-account-
    validTokenType: bearer
    userInfoEndpointUri: https://OAUTH-SERVER-ADDRESS/auth/realms/external/protocol/openid-connect/userinfo
    enableOauthBearer: false 1
    enablePlain: true 2
    tokenEndpointUri: https://OAUTH-SERVER-ADDRESS/auth/realms/external/protocol/openid-connect/token 3
    #...

1
Disables OAUTHBEARER authentication on the listener. If true or the option is not specified, OAUTHBEARER authentication is enabled.
2
Enables PLAIN authentication on the listener. Default is false.
3
The OAuth 2.0 token endpoint URL to your authorization server. Must be set if enablePlain is true, and the client ID and secret are used for authentication.

See OAuth 2.0 authentication mechanisms and Configuring OAuth 2.0 support for Kafka brokers

2.17. Kafka Connect rack property

A new rack property is now available for Kafka Connect. In Kafka, rack awareness is used to spread partition replicas across different racks. By configuring a rack for a Kafka Connect cluster, the connectors' consumers are allowed to fetch data from the closest replica. This is useful when a Kafka cluster spans multiple datacenters.

The topology key must match a label assigned to the cluster nodes.

Example rack configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
#...
spec:
  #...
  rack:
    topologyKey: topology.kubernetes.io/zone

See KafkaConnectSpec schema reference and KafkaConnectS2ISpec schema reference

2.18. Pod topology spread constraints

Pod topology spread constraints are now supported for the following AMQ Streams custom resources:

  • Kafka, including:

    • ZooKeeper
    • Entity Operator
  • KafkaConnect
  • KafkaConnectS2I
  • KafkaBridge
  • KafkaMirrorMaker2 and KafkaMirrorMaker

Pod topology spread constraints allow you to distribute Kafka related pods across nodes, zones, regions, or other user-defined domains. You can use them together with the existing affinity and tolerations properties for pod scheduling.

Constraints are specified in the template.pod.topologySpreadConstraints property in the relevant custom resource.

Example pod topology spread constraint for Kafka Connect

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
#...
spec:
  # ...
  template:
    pod:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            label1: value1
#...

See: