Chapter 2. Enhancements

The enhancements added in this release are outlined below.

2.1. Kafka 2.5.0 enhancements

For an overview of the enhancements introduced with Kafka 2.5.0, refer to the Kafka 2.5.0 Release Notes.

2.2. ZooKeeper enhancements

With the move to support for ZooKeeper 3.5.* versions, TLS sidecars are no longer necessary in ZooKeeper pods. The sidecars have been removed, and ZooKeeper's native TLS support is now used instead.

2.3. Expanded OAuth 2.0 authentication configuration options

New configuration options make it possible to integrate with a wider set of authorization servers.

Depending on how you apply OAuth 2.0 authentication, and the type of authorization server, there are additional (optional) configuration settings you can use.

Additional configuration options for Kafka brokers

  # ...
  authentication:
    type: oauth
    # ...
    checkIssuer: false 1
    fallbackUserNameClaim: client_id 2
    fallbackUserNamePrefix: client-account- 3
    validTokenType: bearer 4
    userInfoEndpointUri: https://AUTH-SERVER-ADDRESS/auth/realms/external/protocol/openid-connect/userinfo 5

1
If your authorization server does not provide an iss claim, it is not possible to perform an issuer check. In this situation, set checkIssuer to false and do not specify a validIssuerUri. Default is true.
2
An authorization server may not provide a single attribute to identify both regular users and clients. A client authenticating in its own name might provide a client ID. But a user authenticating using a username and password, to obtain a refresh token or an access token, might provide a username attribute in addition to a client ID. Use this fallback option to specify the username claim (attribute) to use if a primary user ID attribute is not available.
3
In situations where fallbackUserNameClaim is applicable, it may also be necessary to prevent name collisions between the values of the username claim and those of the fallback username claim. Consider a situation where both a client called producer and a regular user called producer exist. To differentiate between the two, you can use this property to add a prefix to the user ID of the client.
4
(Only applicable when using an introspection endpoint URI) Depending on the authorization server you are using, the introspection endpoint might not return a token type attribute, or the attribute might contain different values. You can specify a valid token type value that the response from the introspection endpoint has to contain.
5
(Only applicable when using an introspection endpoint URI) The authorization server may be configured or implemented in such a way that it does not provide any identifiable information in an introspection endpoint response. To obtain the user ID, you can configure the URI of the userinfo endpoint as a fallback. The userNameClaim, fallbackUserNameClaim, and fallbackUserNamePrefix settings are applied to the response of the userinfo endpoint.
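To illustrate how the fallback settings above interact, consider the following sketch. The token contents shown are hypothetical; the principal names follow from the fallbackUserNameClaim and fallbackUserNamePrefix values in the example configuration.

```yaml
# Hypothetical illustration of username resolution with the settings above:
#
#   fallbackUserNameClaim: client_id
#   fallbackUserNamePrefix: client-account-
#
# Token for a regular user:  { "username": "producer", ... }
#   -> resolved principal: "producer"
#
# Token for a client (no username claim):  { "client_id": "producer", ... }
#   -> resolved principal: "client-account-producer"
#
# The prefix ensures the client principal cannot collide with the
# regular user "producer".
```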

Additional configuration options for Kafka components

# ...
spec:
  # ...
  authentication:
    # ...
    disableTlsHostnameVerification: true
    checkAccessTokenType: false
    accessTokenIsJwt: false
    scope: any 1

1
The scope for requesting the token from the token endpoint. An authorization server may require a client to specify the scope.

See Configuring OAuth 2.0 support for Kafka brokers and Configuring OAuth 2.0 for Kafka components.

2.4. Metrics for the Cluster Operator, Topic Operator and User Operator

The Prometheus server is not supported as part of the AMQ Streams distribution. However, the Prometheus endpoint and JMX exporter used to expose the metrics are supported.

As well as monitoring Kafka components, you can now add metrics configuration to monitor the Cluster Operator, Topic Operator and User Operator.

An example Grafana dashboard is provided, which allows you to view the following operator metrics exposed by Prometheus:

  • The number of custom resources being processed
  • The number of reconciliations being processed
  • JVM metrics
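As a sketch of how these operator metrics might be scraped, assuming you run the Prometheus Operator and that the operator pods expose a /metrics endpoint over HTTP, a PodMonitor resource along the following lines could be used. The labels, namespace, and port name here are hypothetical; adjust them to match your deployment.

```yaml
# Hypothetical PodMonitor for scraping Cluster Operator metrics with the
# Prometheus Operator. The label selector, namespace, and port name are
# assumptions, not values defined by this release.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: cluster-operator-metrics
  labels:
    app: strimzi
spec:
  selector:
    matchLabels:
      strimzi.io/kind: cluster-operator
  namespaceSelector:
    matchNames:
      - my-kafka-project
  podMetricsEndpoints:
    - path: /metrics
      port: http
```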

See Introducing Metrics to Kafka.

2.5. SSL configuration options

You can now run external listeners with specific cipher suites for a TLS version.

For Kafka components running on AMQ Streams, you can use the three allowed ssl configuration options to run external listeners with a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer.

The example here shows the configuration for a Kafka resource:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  config: 1
    # ...
    ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" 2
    ssl.enabled.protocols: "TLSv1.2" 3
    ssl.protocol: "TLSv1.2" 4

1
The Kafka broker configuration, where the SSL configuration options are specified.
2
The cipher suite for TLS, combining the ECDHE key exchange mechanism, RSA authentication algorithm, AES bulk encryption algorithm, and SHA384 MAC algorithm.
3
The SSL protocol TLSv1.2 is enabled.
4
Specifies the TLSv1.2 protocol to generate the SSL context.

See Custom Resource API Reference.

2.6. Cross-Origin Resource Sharing (CORS) for Kafka Bridge

You can now enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). CORS is an HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and the HTTP methods that can access them. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster.

HTTP configuration for the Kafka Bridge

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  http:
    port: 8080
    cors:
      allowedOrigins: "https://strimzi.io" 1
      allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" 2
  # ...

1
Comma-separated list of allowed CORS origins. You can use a URL or a Java regular expression.
2
Comma-separated list of allowed HTTP methods for CORS.
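Because allowedOrigins accepts a Java regular expression as well as a plain URL, a configuration along the following lines could be used to match a site and any of its subdomains. This is a hypothetical sketch; the origin pattern and methods shown are illustrative values, not defaults.

```yaml
# Hypothetical sketch: a Java regular expression in allowedOrigins,
# matching https://example.com and any of its subdomains.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  http:
    port: 8080
    cors:
      allowedOrigins: "https://([a-z0-9-]+\\.)?example\\.com"
      allowedMethods: "GET,POST,OPTIONS"
```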

See Kafka Bridge HTTP configuration.