Deploying and Managing AMQ Streams on OpenShift
Deploy and manage AMQ Streams 2.5 on OpenShift Container Platform
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Deployment overview
AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster.
This guide provides instructions for deploying and managing AMQ Streams. Deployment options and steps are covered using the example installation files included with AMQ Streams. While the guide highlights important configuration considerations, it does not cover all available options. For a deeper understanding of the Kafka component configuration options, refer to the AMQ Streams Custom Resource API Reference.
In addition to deployment instructions, the guide offers pre- and post-deployment guidance. It covers setting up and securing client access to your Kafka cluster. Furthermore, it explores additional deployment options such as metrics integration, distributed tracing, and cluster management tools like Cruise Control and the AMQ Streams Drain Cleaner. You’ll also find recommendations on managing AMQ Streams and fine-tuning Kafka configuration for optimal performance.
Upgrade instructions are provided for both AMQ Streams and Kafka, to help keep your deployment up to date.
AMQ Streams is designed to be compatible with all types of OpenShift clusters, irrespective of their distribution. Whether your deployment involves public or private clouds, or if you are setting up a local development environment, the instructions in this guide are applicable in all cases.
1.1. AMQ Streams custom resources
Deployment of Kafka components to an OpenShift cluster using AMQ Streams is highly configurable through the application of custom resources. These custom resources are created as instances of APIs added by Custom Resource Definitions (CRDs) to extend OpenShift resources.
CRDs act as configuration instructions to describe the custom resources in an OpenShift cluster, and are provided with AMQ Streams for each Kafka component used in a deployment, as well as users and topics. CRDs and custom resources are defined as YAML files. Example YAML files are provided with the AMQ Streams distribution.
CRDs also allow AMQ Streams resources to benefit from native OpenShift features like CLI accessibility and configuration validation.
1.1.1. AMQ Streams custom resource example
CRDs require a one-time installation in a cluster to define the schemas used to instantiate and manage AMQ Streams-specific resources.
After a new custom resource type is added to your cluster by installing a CRD, you can create instances of the resource based on its specification.
Depending on the cluster setup, installation typically requires cluster admin privileges.
Access to manage custom resources is limited to AMQ Streams administrators. For more information, see Section 4.5, “Designating AMQ Streams administrators”.
A CRD defines a new kind of resource, such as kind: Kafka, within an OpenShift cluster. The Kubernetes API server allows custom resources to be created based on the kind and understands from the CRD how to validate and store the custom resource when it is added to the OpenShift cluster.

When a CustomResourceDefinition is deleted, custom resources of that type are also deleted. Additionally, OpenShift resources created by the custom resource are also deleted, such as Deployment, Pod, Service, and ConfigMap resources.

Each AMQ Streams-specific custom resource conforms to the schema defined by the CRD for the resource's kind. The custom resources for AMQ Streams components have common configuration properties, which are defined under spec.
To understand the relationship between a CRD and a custom resource, let’s look at a sample of the CRD for a Kafka topic.
Kafka topic CRD
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata: # (1)
  name: kafkatopics.kafka.strimzi.io
  labels:
    app: strimzi
spec: # (2)
  group: kafka.strimzi.io
  version: v1beta2
  scope: Namespaced
  names:
    # ...
    singular: kafkatopic
    plural: kafkatopics
    shortNames:
      - kt # (3)
  additionalPrinterColumns: # (4)
    # ...
  subresources:
    status: {} # (5)
  validation: # (6)
    openAPIV3Schema:
      properties:
        spec:
          type: object
          properties:
            partitions:
              type: integer
              minimum: 1
            replicas:
              type: integer
              minimum: 1
              maximum: 32767
  # ...
1. The metadata for the topic CRD, its name and a label to identify the CRD.
2. The specification for this CRD, including the group (domain) name, the plural name, and the supported schema version, which are used in the URL to access the API of the topic. The other names are used to identify instance resources in the CLI. For example, oc get kafkatopic my-topic or oc get kafkatopics.
3. The shortname can be used in CLI commands. For example, oc get kt can be used as an abbreviation instead of oc get kafkatopic.
4. The information presented when using a get command on the custom resource.
5. The current status of the CRD as described in the schema reference for the resource.
6. openAPIV3Schema validation provides validation for the creation of topic custom resources. For example, a topic requires at least one partition and one replica.
You can identify the CRD YAML files supplied with the AMQ Streams installation files, because the file names contain an index number followed by ‘Crd’.
Here is a corresponding example of a KafkaTopic custom resource.
Kafka topic custom resource
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic # (1)
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster # (2)
spec: # (3)
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
status:
  conditions: # (4)
    - lastTransitionTime: "2019-08-20T11:37:00.706Z"
      status: "True"
      type: Ready
  observedGeneration: 1
  # ...
1. The kind and apiVersion identify the CRD of which the custom resource is an instance.
2. A label, applicable only to KafkaTopic and KafkaUser resources, that defines the name of the Kafka cluster (which is the same as the name of the Kafka resource) to which a topic or user belongs.
3. The spec shows the number of partitions and replicas for the topic as well as the configuration parameters for the topic itself. In this example, the retention period for a message to remain in the topic and the segment file size for the log are specified.
4. Status conditions for the KafkaTopic resource. The Ready type condition changed to True at the lastTransitionTime.
Custom resources can be applied to a cluster through the platform CLI. When the custom resource is created, it uses the same validation as the built-in resources of the Kubernetes API.
After a KafkaTopic custom resource is created, the Topic Operator is notified and corresponding Kafka topics are created in AMQ Streams.
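For example, assuming the KafkaTopic manifest above is saved in a hypothetical file named my-topic.yaml, a minimal sketch of creating and checking the topic with the oc CLI:

# Create the topic custom resource; the Topic Operator creates the Kafka topic
oc apply -f my-topic.yaml

# List KafkaTopic resources and confirm the topic is present
oc get kafkatopics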
1.2. AMQ Streams operators
AMQ Streams operators are purpose-built with specialist operational knowledge to effectively manage Kafka on OpenShift. Each operator performs a distinct function.
- Cluster Operator
- The Cluster Operator handles the deployment and management of Apache Kafka clusters on OpenShift. It automates the setup of Kafka brokers and other Kafka components and resources.
- Topic Operator
- The Topic Operator manages the creation, configuration, and deletion of topics within Kafka clusters.
- User Operator
- The User Operator manages Kafka users that require access to Kafka brokers.
When you deploy AMQ Streams, you first deploy the Cluster Operator. The Cluster Operator is then ready to handle the deployment of Kafka. You can also deploy the Topic Operator and User Operator using the Cluster Operator (recommended) or as standalone operators. You would use a standalone operator with a Kafka cluster that is not managed by the Cluster Operator.
The Topic Operator and User Operator are part of the Entity Operator. The Cluster Operator can deploy one or both operators based on the Entity Operator configuration.
To deploy the standalone operators, you need to set environment variables to connect to a Kafka cluster. You do not need to set these environment variables if you are deploying the operators using the Cluster Operator, because the Cluster Operator sets them for you.
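For example, a partial sketch of the container environment for a standalone Topic Operator Deployment; STRIMZI_NAMESPACE and STRIMZI_KAFKA_BOOTSTRAP_SERVERS are the key connection settings, and the values shown are placeholders:

env:
  # Namespace in which to watch for KafkaTopic resources
  - name: STRIMZI_NAMESPACE
    value: my-namespace
  # Bootstrap address of the Kafka cluster the operator connects to
  - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS
    value: my-cluster-kafka-bootstrap:9092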
1.2.1. Watching AMQ Streams resources in OpenShift namespaces
Operators watch and manage AMQ Streams resources in OpenShift namespaces. The Cluster Operator can watch a single namespace, multiple namespaces, or all namespaces in an OpenShift cluster. The Topic Operator and User Operator can watch a single namespace.
- The Cluster Operator watches for Kafka resources
- The Topic Operator watches for KafkaTopic resources
- The User Operator watches for KafkaUser resources
The Topic Operator and the User Operator can only watch a single Kafka cluster in a namespace, and they can only be connected to a single Kafka cluster.
If multiple Topic Operators watch the same namespace, name collisions and topic deletion can occur. This is because each Kafka cluster uses Kafka topics that have the same name (such as __consumer_offsets). Make sure that only one Topic Operator watches a given namespace.
When using multiple User Operators with a single namespace, a user with a given username can exist in more than one Kafka cluster.
If you deploy the Topic Operator and User Operator using the Cluster Operator, they watch the Kafka cluster deployed by the Cluster Operator by default. You can also specify a namespace using watchedNamespace in the operator configuration.
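For example, a sketch of setting watchedNamespace through the entityOperator configuration of the Kafka resource (the namespace names are placeholders):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  entityOperator:
    topicOperator:
      # Watch for KafkaTopic resources in a specific namespace
      watchedNamespace: my-topic-namespace
    userOperator:
      # Watch for KafkaUser resources in a specific namespace
      watchedNamespace: my-user-namespace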
For a standalone deployment of each operator, you specify a namespace and connection to the Kafka cluster to watch in the configuration.
1.2.2. Managing RBAC resources
The Cluster Operator creates and manages role-based access control (RBAC) resources for AMQ Streams components that need access to OpenShift resources.
For the Cluster Operator to function, it needs permission within the OpenShift cluster to interact with Kafka resources, such as Kafka and KafkaConnect, as well as managed resources like ConfigMap, Pod, Deployment, and Service.
Permission is specified through the following OpenShift RBAC resources:
- ServiceAccount
- Role and ClusterRole
- RoleBinding and ClusterRoleBinding
1.2.2.1. Delegating privileges to AMQ Streams components
The Cluster Operator runs under a service account called strimzi-cluster-operator. It is assigned cluster roles that give it permission to create the RBAC resources for AMQ Streams components. Role bindings associate the cluster roles with the service account.
OpenShift prevents components operating under one ServiceAccount from granting another ServiceAccount privileges that the granting ServiceAccount does not have. Because the Cluster Operator creates the RoleBinding and ClusterRoleBinding RBAC resources needed by the resources it manages, it requires a role that gives it the same privileges.
The following tables describe the RBAC resources created by the Cluster Operator.
Table 1.1. ServiceAccount resources

Name | Used by |
---|---|
<cluster_name>-kafka | Kafka broker pods |
<cluster_name>-zookeeper | ZooKeeper pods |
<cluster_name>-connect | Kafka Connect pods |
<cluster_name>-mirror-maker | MirrorMaker pods |
<cluster_name>-mirrormaker2 | MirrorMaker 2 pods |
<cluster_name>-bridge | Kafka Bridge pods |
<cluster_name>-entity-operator | Entity Operator |
Table 1.2. ClusterRole resources

Name | Used by |
---|---|
strimzi-cluster-operator-namespaced | Cluster Operator |
strimzi-cluster-operator-global | Cluster Operator |
strimzi-cluster-operator-leader-election | Cluster Operator |
strimzi-kafka-broker | Cluster Operator, rack feature (when used) |
strimzi-entity-operator | Cluster Operator, Topic Operator, User Operator |
strimzi-kafka-client | Cluster Operator, Kafka clients for rack awareness |
Table 1.3. ClusterRoleBinding resources

Name | Used by |
---|---|
strimzi-cluster-operator | Cluster Operator |
strimzi-cluster-operator-kafka-broker-delegation | Cluster Operator, Kafka brokers for rack awareness |
strimzi-cluster-operator-kafka-client-delegation | Cluster Operator, Kafka clients for rack awareness |
Table 1.4. RoleBinding resources

Name | Used by |
---|---|
strimzi-cluster-operator | Cluster Operator |
strimzi-cluster-operator-entity-operator-delegation | Cluster Operator, Entity Operator |
1.2.2.2. Running the Cluster Operator using a ServiceAccount
The Cluster Operator is best run using a ServiceAccount.
Example ServiceAccount for the Cluster Operator
apiVersion: v1
kind: ServiceAccount
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
The Deployment of the operator then needs to specify this in its spec.template.spec.serviceAccountName.
Partial example of Deployment for the Cluster Operator
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
spec:
  replicas: 1
  selector:
    matchLabels:
      name: strimzi-cluster-operator
      strimzi.io/kind: cluster-operator
  template:
    metadata:
      labels:
        name: strimzi-cluster-operator
        strimzi.io/kind: cluster-operator
    spec:
      serviceAccountName: strimzi-cluster-operator
      # ...
1.2.2.3. ClusterRole resources
The Cluster Operator uses ClusterRole resources to provide the necessary access to resources. Depending on the OpenShift cluster setup, a cluster administrator might be needed to create the cluster roles.
Cluster administrator rights are only needed for the creation of ClusterRole resources. The Cluster Operator will not run under a cluster admin account.
ClusterRole resources follow the principle of least privilege and contain only those privileges needed by the Cluster Operator to operate clusters of Kafka components. The first set of assigned privileges allows the Cluster Operator to manage OpenShift resources such as Deployment, Pod, and ConfigMap.
All cluster roles are required by the Cluster Operator in order to delegate privileges.
The Cluster Operator uses the strimzi-cluster-operator-namespaced and strimzi-cluster-operator-global cluster roles to grant permission at the namespace-scoped and cluster-scoped resource levels.
ClusterRole with namespaced resources for the Cluster Operator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-cluster-operator-namespaced
  labels:
    app: strimzi
rules:
  # Resources in this role are used by the operator based on an operand being deployed in some namespace. When needed, you
  # can deploy the operator as a cluster-wide operator. But grant the rights listed in this role only on the namespaces
  # where the operands will be deployed. That way, you can limit the access the operator has to other namespaces where it
  # does not manage any clusters.
  - apiGroups:
      - "rbac.authorization.k8s.io"
    resources:
      # The cluster operator needs to access and manage rolebindings to grant Strimzi components cluster permissions
      - rolebindings
    verbs: [get, list, watch, create, delete, patch, update]
  - apiGroups:
      - "rbac.authorization.k8s.io"
    resources:
      # The cluster operator needs to access and manage roles to grant the entity operator permissions
      - roles
    verbs: [get, list, watch, create, delete, patch, update]
  - apiGroups:
      - ""
    resources:
      # The cluster operator needs to access and delete pods, this is to allow it to monitor pod health and coordinate rolling updates
      - pods
      # The cluster operator needs to access and manage service accounts to grant Strimzi components cluster permissions
      - serviceaccounts
      # The cluster operator needs to access and manage config maps for Strimzi components configuration
      - configmaps
      # The cluster operator needs to access and manage services and endpoints to expose Strimzi components to network traffic
      - services
      - endpoints
      # The cluster operator needs to access and manage secrets to handle credentials
      - secrets
      # The cluster operator needs to access and manage persistent volume claims to bind them to Strimzi components for persistent data
      - persistentvolumeclaims
    verbs: [get, list, watch, create, delete, patch, update]
  - apiGroups:
      - "apps"
    resources:
      # The cluster operator needs to access and manage deployments to run deployment based Strimzi components
      - deployments
      - deployments/scale
      - deployments/status
      # The cluster operator needs to access and manage stateful sets to run stateful sets based Strimzi components
      - statefulsets
      # The cluster operator needs to access replica-sets to manage Strimzi components and to determine error states
      - replicasets
    verbs: [get, list, watch, create, delete, patch, update]
  - apiGroups:
      - "" # legacy core events api, used by topic operator
      - "events.k8s.io" # new events api, used by cluster operator
    resources:
      # The cluster operator needs to be able to create events and delegate permissions to do so
      - events
    verbs: [create]
  - apiGroups:
      # Kafka Connect Build on OpenShift requirement
      - build.openshift.io
    resources:
      - buildconfigs
      - buildconfigs/instantiate
      - builds
    verbs: [get, list, watch, create, delete, patch, update]
  - apiGroups:
      - networking.k8s.io
    resources:
      # The cluster operator needs to access and manage network policies to lock down communication between Strimzi components
      - networkpolicies
      # The cluster operator needs to access and manage ingresses which allow external access to the services in a cluster
      - ingresses
    verbs: [get, list, watch, create, delete, patch, update]
  - apiGroups:
      - route.openshift.io
    resources:
      # The cluster operator needs to access and manage routes to expose Strimzi components for external access
      - routes
      - routes/custom-host
    verbs: [get, list, watch, create, delete, patch, update]
  - apiGroups:
      - image.openshift.io
    resources:
      # The cluster operator needs to verify the image stream when used for Kafka Connect image build
      - imagestreams
    verbs: [get]
  - apiGroups:
      - policy
    resources:
      # The cluster operator needs to access and manage pod disruption budgets; this limits the number of concurrent disruptions
      # that a Strimzi component experiences, allowing for higher availability
      - poddisruptionbudgets
    verbs: [get, list, watch, create, delete, patch, update]
ClusterRole with cluster-scoped resources for the Cluster Operator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-cluster-operator-global
  labels:
    app: strimzi
rules:
  - apiGroups:
      - "rbac.authorization.k8s.io"
    resources:
      # The cluster operator needs to create and manage cluster role bindings in the case of an install where a user
      # has specified they want their cluster role bindings generated
      - clusterrolebindings
    verbs: [get, list, watch, create, delete, patch, update]
  - apiGroups:
      - storage.k8s.io
    resources:
      # The cluster operator requires "get" permissions to view storage class details
      # This is because only a persistent volume of a supported storage class type can be resized
      - storageclasses
    verbs: [get]
  - apiGroups:
      - ""
    resources:
      # The cluster operator requires "list" permissions to view all nodes in a cluster
      # The listing is used to determine the node addresses when NodePort access is configured
      # These addresses are then exposed in the custom resource states
      - nodes
    verbs: [list]
The strimzi-cluster-operator-leader-election cluster role represents the permissions needed for leader election.
ClusterRole with leader election permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-cluster-operator-leader-election
  labels:
    app: strimzi
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      # The cluster operator needs to access and manage leases for leader election
      # The "create" verb cannot be used with "resourceNames"
      - leases
    verbs: [create]
  - apiGroups:
      - coordination.k8s.io
    resources:
      # The cluster operator needs to access and manage leases for leader election
      - leases
    resourceNames:
      # The default RBAC files give the operator only access to the Lease resource named strimzi-cluster-operator
      # If you want to use another resource name or resource namespace, you have to configure the RBAC resources accordingly
      - strimzi-cluster-operator
    verbs: [get, list, watch, delete, patch, update]
The strimzi-kafka-broker cluster role represents the access needed by the init container in Kafka pods that use rack awareness.

A role binding named strimzi-<cluster_name>-kafka-init grants the <cluster_name>-kafka service account access to nodes within a cluster using the strimzi-kafka-broker role. If the rack feature is not used and the cluster is not exposed through nodeport, no binding is created.
ClusterRole for the Cluster Operator allowing it to delegate access to OpenShift nodes to the Kafka broker pods
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-kafka-broker
  labels:
    app: strimzi
rules:
  - apiGroups:
      - ""
    resources:
      # The Kafka Brokers require "get" permissions to view the node they are on
      # This information is used to generate a Rack ID that is used for High Availability configurations
      - nodes
    verbs: [get]
The strimzi-entity-operator cluster role represents the access needed by the Topic Operator and User Operator.

The Topic Operator produces OpenShift events with status information, so the <cluster_name>-entity-operator service account is bound to the strimzi-entity-operator cluster role, which grants this access via the strimzi-entity-operator role binding.
ClusterRole for the Cluster Operator allowing it to delegate access to events to the Topic and User Operators
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-entity-operator
  labels:
    app: strimzi
rules:
  - apiGroups:
      - "kafka.strimzi.io"
    resources:
      # The entity operator runs the KafkaTopic assembly operator, which needs to access and manage KafkaTopic resources
      - kafkatopics
      - kafkatopics/status
      # The entity operator runs the KafkaUser assembly operator, which needs to access and manage KafkaUser resources
      - kafkausers
      - kafkausers/status
    verbs: [get, list, watch, create, patch, update, delete]
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      # The entity operator needs to be able to create events
      - create
  - apiGroups:
      - ""
    resources:
      # The entity operator user-operator needs to access and manage secrets to store generated credentials
      - secrets
    verbs: [get, list, watch, create, delete, patch, update]
The strimzi-kafka-client cluster role represents the access needed by Kafka clients that use rack awareness.
ClusterRole for the Cluster Operator allowing it to delegate access to OpenShift nodes to the Kafka client-based pods
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-kafka-client
  labels:
    app: strimzi
rules:
  - apiGroups:
      - ""
    resources:
      # The Kafka clients (Connect, Mirror Maker, etc.) require "get" permissions to view the node they are on
      # This information is used to generate a Rack ID (client.rack option) that is used for consuming from the closest
      # replicas when enabled
      - nodes
    verbs: [get]
1.2.2.4. ClusterRoleBinding resources
The Cluster Operator uses ClusterRoleBinding and RoleBinding resources to associate its ClusterRole with its ServiceAccount. Cluster role bindings are required by cluster roles containing cluster-scoped resources.
Example ClusterRoleBinding for the Cluster Operator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
subjects:
  - kind: ServiceAccount
    name: strimzi-cluster-operator
    namespace: myproject
roleRef:
  kind: ClusterRole
  name: strimzi-cluster-operator-global
  apiGroup: rbac.authorization.k8s.io
Cluster role bindings are also needed for the cluster roles used in delegating privileges:
Example ClusterRoleBinding for the Cluster Operator and Kafka broker rack awareness
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: strimzi-cluster-operator-kafka-broker-delegation
  labels:
    app: strimzi
# The Kafka broker cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Kafka brokers.
# This must be done to avoid escalating privileges which would be blocked by Kubernetes.
subjects:
  - kind: ServiceAccount
    name: strimzi-cluster-operator
    namespace: myproject
roleRef:
  kind: ClusterRole
  name: strimzi-kafka-broker
  apiGroup: rbac.authorization.k8s.io
Example ClusterRoleBinding for the Cluster Operator and Kafka client rack awareness
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: strimzi-cluster-operator-kafka-client-delegation
  labels:
    app: strimzi
# The Kafka clients cluster role must be bound to the cluster operator service account so that it can delegate the
# cluster role to the Kafka clients using it for consuming from closest replica.
# This must be done to avoid escalating privileges which would be blocked by Kubernetes.
subjects:
  - kind: ServiceAccount
    name: strimzi-cluster-operator
    namespace: myproject
roleRef:
  kind: ClusterRole
  name: strimzi-kafka-client
  apiGroup: rbac.authorization.k8s.io
Cluster roles containing only namespaced resources are bound using role bindings only.
Example RoleBinding for the Cluster Operator
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
subjects:
  - kind: ServiceAccount
    name: strimzi-cluster-operator
    namespace: myproject
roleRef:
  kind: ClusterRole
  name: strimzi-cluster-operator-namespaced
  apiGroup: rbac.authorization.k8s.io
Example RoleBinding for the Cluster Operator and Entity Operator delegation
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: strimzi-cluster-operator-entity-operator-delegation
  labels:
    app: strimzi
# The Entity Operator cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Entity Operator.
# This must be done to avoid escalating privileges which would be blocked by Kubernetes.
subjects:
  - kind: ServiceAccount
    name: strimzi-cluster-operator
    namespace: myproject
roleRef:
  kind: ClusterRole
  name: strimzi-entity-operator
  apiGroup: rbac.authorization.k8s.io
1.3. Using the Kafka Bridge to connect with a Kafka cluster
You can use the AMQ Streams Kafka Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol.
When you set up the Kafka Bridge, you configure HTTP access to the Kafka cluster. You can then use the Kafka Bridge to produce and consume messages from the cluster, as well as perform other operations through its REST interface.
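For example, once the bridge is reachable over HTTP, a sketch of producing a message to a topic through the bridge REST API (my-bridge-bridge-service:8080 is a placeholder address):

# Send one JSON-encoded record to the topic my-topic through the bridge
curl -X POST http://my-bridge-bridge-service:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"value":"hello from the bridge"}]}'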
Additional resources
- For information on installing and using the Kafka Bridge, see Using the AMQ Streams Kafka Bridge.
1.4. Seamless FIPS support
Federal Information Processing Standards (FIPS) are standards for computer security and interoperability. When running AMQ Streams on a FIPS-enabled OpenShift cluster, the OpenJDK used in AMQ Streams container images automatically switches to FIPS mode. From version 2.4, AMQ Streams can run on FIPS-enabled OpenShift clusters without any changes or special configuration. It uses only the FIPS-compliant security libraries from the OpenJDK.
Minimum password length
When running in the FIPS mode, SCRAM-SHA-512 passwords need to be at least 32 characters long. From AMQ Streams 2.4, the default password length in AMQ Streams User Operator is set to 32 characters as well. If you have a Kafka cluster with custom configuration that uses a password length that is less than 32 characters, you need to update your configuration. If you have any users with passwords shorter than 32 characters, you need to regenerate a password with the required length. You can do that, for example, by deleting the user secret and waiting for the User Operator to create a new password with the appropriate length.
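For example, assuming a KafkaUser named my-user whose credentials secret has the same name, a sketch of forcing regeneration:

# Delete the user secret; the User Operator recreates it with a new password of the required length
oc delete secret my-user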
If you are using FIPS-enabled OpenShift clusters, you may experience higher memory consumption compared to regular OpenShift clusters. To avoid any issues, we suggest increasing the memory request to at least 512Mi.
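For example, a partial sketch of raising the memory request for the Kafka brokers through the Kafka custom resource (the 512Mi value follows the suggestion above; adjust it to your workload):

spec:
  kafka:
    # ...
    resources:
      requests:
        memory: 512Mi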
1.5. Document Conventions
User-replaced values
User-replaced values, also known as replaceables, are shown with angle brackets (< >). Underscores ( _ ) are used for multi-word values. If the value refers to code or commands, monospace is also used.
For example, the following code shows that <my_namespace> must be replaced by the correct namespace name:
sed -i 's/namespace: .*/namespace: <my_namespace>/' install/cluster-operator/*RoleBinding*.yaml
1.6. Additional resources
Chapter 2. AMQ Streams installation methods
You can install AMQ Streams on OpenShift 4.10 to 4.14 in two ways.
Installation method | Description |
---|---|
Installation artifacts (YAML files) | Download Red Hat AMQ Streams 2.5 OpenShift Installation and Example Files from the AMQ Streams software downloads page, then deploy the YAML installation artifacts to your OpenShift cluster using oc. You can also use the example YAML files included with the artifacts to deploy and configure Kafka components. |
OperatorHub | Use the AMQ Streams operator in the OperatorHub to deploy AMQ Streams to a single namespace or all namespaces. |
For the greatest flexibility, choose the installation artifacts method. The OperatorHub method provides a standard configuration and allows you to take advantage of automatic updates.
Installation of AMQ Streams using Helm is not supported.
Chapter 3. What is deployed with AMQ Streams
Apache Kafka components are provided for deployment to OpenShift with the AMQ Streams distribution. The Kafka components are generally run as clusters for availability.
A typical deployment incorporating Kafka components might include:
- Kafka cluster of broker nodes
- ZooKeeper cluster of replicated ZooKeeper instances
- Kafka Connect cluster for external data connections
- Kafka MirrorMaker cluster to mirror the Kafka cluster in a secondary cluster
- Kafka Exporter to extract additional Kafka metrics data for monitoring
- Kafka Bridge to make HTTP-based requests to the Kafka cluster
- Cruise Control to rebalance topic partitions across broker nodes
Not all of these components are mandatory, though you need Kafka and ZooKeeper as a minimum. Some components can be deployed without Kafka, such as MirrorMaker or Kafka Connect.
3.1. Order of deployment
The required order of deployment to an OpenShift cluster is as follows:
- Deploy the Cluster Operator to manage your Kafka cluster
- Deploy the Kafka cluster with the ZooKeeper cluster, and include the Topic Operator and User Operator in the deployment
Optionally deploy:
- The Topic Operator and User Operator standalone if you did not deploy them with the Kafka cluster
- Kafka Connect
- Kafka MirrorMaker
- Kafka Bridge
- Components for the monitoring of metrics
The Cluster Operator creates OpenShift resources for the components, such as Deployment, Service, and Pod resources. The names of the OpenShift resources are appended with the name specified for a component when it's deployed. For example, a Kafka cluster named my-kafka-cluster has a service named my-kafka-cluster-kafka.
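For example, you can list the services the Cluster Operator created for a cluster by using the strimzi.io/cluster label (a sketch; the same label appears in the KafkaTopic example in Section 1.1.1):

# List all services belonging to the my-kafka-cluster Kafka cluster
oc get services -l strimzi.io/cluster=my-kafka-cluster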
Chapter 4. Preparing for your AMQ Streams deployment
Prepare for a deployment of AMQ Streams by completing any necessary pre-deployment tasks. Take the necessary preparatory steps according to your specific requirements, such as the following:
- Ensuring you have the necessary prerequisites before deploying AMQ Streams
- Downloading the AMQ Streams release artifacts to facilitate your deployment
- Pushing the AMQ Streams container images into your own registry (if required)
- Setting up admin roles to enable configuration of custom resources used in the deployment
To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs.
4.1. Deployment prerequisites
To deploy AMQ Streams, you will need the following:
An OpenShift 4.10 to 4.14 cluster.
AMQ Streams is based on Strimzi 0.36.x.
- The oc command-line tool is installed and configured to connect to the running cluster.
4.2. Downloading AMQ Streams release artifacts
To use deployment files to install AMQ Streams, download and extract the files from the AMQ Streams software downloads page.
AMQ Streams release artifacts include sample YAML files to help you deploy the components of AMQ Streams to OpenShift, perform common operations, and configure your Kafka cluster.
Use oc to deploy the Cluster Operator from the install/cluster-operator folder of the downloaded ZIP file. For more information about deploying and configuring the Cluster Operator, see Section 6.2, “Deploying the Cluster Operator”.

In addition, if you want to use standalone installations of the Topic and User Operators with a Kafka cluster that is not managed by the AMQ Streams Cluster Operator, you can deploy them from the install/topic-operator and install/user-operator folders.
AMQ Streams container images are also available through the Red Hat Ecosystem Catalog. However, we recommend that you use the YAML files provided to deploy AMQ Streams.
4.3. Pushing container images to your own registry
Container images for AMQ Streams are available in the Red Hat Ecosystem Catalog. The installation YAML files provided by AMQ Streams will pull the images directly from the Red Hat Ecosystem Catalog.
If you do not have access to the Red Hat Ecosystem Catalog or want to use your own container repository, do the following:
- Pull all container images listed in the following table
- Push them into your own registry
- Update the image names in the installation YAML files
Each Kafka version supported for the release has a separate image.
Container image | Namespace/Repository | Description |
---|---|---|
Kafka | registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.1 (one image for each supported Kafka version, for example kafka-34-rhel8:2.5.1) | AMQ Streams image for running Kafka, including Kafka Broker, Kafka Connect, Kafka MirrorMaker, ZooKeeper, and Cruise Control |
Operator | registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.1 | AMQ Streams image for running the operators: Cluster Operator, Topic Operator, and User Operator |
Kafka Bridge | registry.redhat.io/amq-streams/bridge-rhel8:2.5.1 | AMQ Streams image for running the AMQ Streams Kafka Bridge |
AMQ Streams Drain Cleaner | registry.redhat.io/amq-streams/drain-cleaner-rhel8 | AMQ Streams image for running the AMQ Streams Drain Cleaner |
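For step 3, a rough sketch of rewriting the image references in the installation files to point at your own registry (my-registry.example.com is a placeholder):

# Replace the Red Hat registry prefix with your own registry in all installation YAML files
sed -i 's#registry.redhat.io/amq-streams/#my-registry.example.com/amq-streams/#g' install/cluster-operator/*.yaml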
4.4. Creating a pull secret for authentication to the container image registry
The installation YAML files provided by AMQ Streams pull container images directly from the Red Hat Ecosystem Catalog. If an AMQ Streams deployment requires authentication, configure authentication credentials in a secret and add it to the installation YAML.
Authentication is not usually required, but might be requested on certain platforms.
Prerequisites
- You need your Red Hat username and password or the login details from your Red Hat registry service account.
You can use your Red Hat subscription to create a registry service account from the Red Hat Customer Portal.
Procedure
Create a pull secret containing your login details and the container registry where the AMQ Streams image is pulled from:
oc create secret docker-registry <pull_secret_name> \
  --docker-server=registry.redhat.io \
  --docker-username=<user_name> \
  --docker-password=<password> \
  --docker-email=<email>
Add your user name and password. The email address is optional.
Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml deployment file to specify the pull secret using the STRIMZI_IMAGE_PULL_SECRETS environment variable:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
        # ...
        env:
          - name: STRIMZI_IMAGE_PULL_SECRETS
            value: "<pull_secret_name>"
  # ...
The secret applies to all pods created by the Cluster Operator.
4.5. Designating AMQ Streams administrators
AMQ Streams provides custom resources for configuration of your deployment. By default, permission to view, create, edit, and delete these resources is limited to OpenShift cluster administrators. AMQ Streams provides two cluster roles that you can use to assign these rights to other users:
- strimzi-view allows users to view and list AMQ Streams resources.
- strimzi-admin allows users to also create, edit or delete AMQ Streams resources.

When you install these roles, they will automatically aggregate (add) these rights to the default OpenShift cluster roles. strimzi-view aggregates to the view role, and strimzi-admin aggregates to the edit and admin roles. Because of the aggregation, you might not need to assign these roles to users who already have similar rights.
The following procedure shows how to assign a strimzi-admin role that allows non-cluster administrators to manage AMQ Streams resources.
A system administrator can designate AMQ Streams administrators after the Cluster Operator is deployed.
Prerequisites
- The AMQ Streams Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs have been deployed with the Cluster Operator.
Procedure
Create the strimzi-view and strimzi-admin cluster roles in OpenShift.

oc create -f install/strimzi-admin
If needed, assign the roles that provide access rights to users that require them.
oc create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user=user1 --user=user2
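You can then verify the access as one of the designated users; for example, a sketch using the kafkas resource from the kafka.strimzi.io API group:

# Returns "yes" if user1 can create Kafka custom resources
oc auth can-i create kafkas.kafka.strimzi.io --as user1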
Chapter 5. Installing AMQ Streams from the OperatorHub using the web console
Install the AMQ Streams operator from the OperatorHub in the OpenShift Container Platform web console.
The procedures in this section show how to:
5.1. Installing the AMQ Streams operator from the OperatorHub
You can install and subscribe to the AMQ Streams operator using the OperatorHub in the OpenShift Container Platform web console.
This procedure describes how to create a project and install the AMQ Streams operator to that project. A project is a representation of a namespace. For manageability, it is a good practice to use namespaces to separate functions.
Make sure you use the appropriate update channel. If you are on a supported version of OpenShift, installing AMQ Streams from the default stable channel is generally safe. However, we do not recommend enabling automatic updates on the stable channel. An automatic upgrade skips any necessary steps that must be performed before the upgrade. Use automatic upgrades only on version-specific channels.
Prerequisites
- Access to an OpenShift Container Platform web console using an account with cluster-admin or strimzi-admin permissions.
Procedure
Navigate in the OpenShift web console to the Home > Projects page and create a project (namespace) for the installation.

We use a project named amq-streams-kafka in this example.

- Navigate to the Operators > OperatorHub page.
Scroll or type a keyword into the Filter by keyword box to find the AMQ Streams operator.
The operator is located in the Streaming & Messaging category.
- Click AMQ Streams to display the operator information.
- Read the information about the operator and click Install.
On the Install Operator page, choose from the following installation and update options:
Update Channel: Choose the update channel for the operator.
- The (default) stable channel contains all the latest updates and releases, including major, minor, and micro releases, which are assumed to be well tested and stable.
- An amq-streams-X.x channel contains the minor and micro release updates for a major release, where X is the major release version number.
- An amq-streams-X.Y.x channel contains the micro release updates for a minor release, where X is the major release version number and Y is the minor release version number.
Installation Mode: Choose the project you created to install the operator on a specific namespace.
You can install the AMQ Streams operator to all namespaces in the cluster (the default option) or a specific namespace. We recommend that you dedicate a specific namespace to the Kafka cluster and other AMQ Streams components.
- Update approval: By default, the AMQ Streams operator is automatically upgraded to the latest AMQ Streams version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades. For more information on operators, see the OpenShift documentation.
Click Install to install the operator to your selected namespace.
The AMQ Streams operator deploys the Cluster Operator, CRDs, and role-based access control (RBAC) resources to the selected namespace.
After the operator is ready for use, navigate to Operators > Installed Operators to verify that the operator has installed to the selected namespace.
The status will show as Succeeded.
You can now use the AMQ Streams operator to deploy Kafka components, starting with a Kafka cluster.
If you navigate to Workloads > Deployments, you can see the deployment details for the Cluster Operator and Entity Operator. The name of the Cluster Operator includes a version number: amq-streams-cluster-operator-<version>. The name is different when deploying the Cluster Operator using the AMQ Streams installation artifacts. In this case, the name is strimzi-cluster-operator.
5.2. Deploying Kafka components using the AMQ Streams operator
When installed on OpenShift, the AMQ Streams operator makes Kafka components available for installation from the user interface.
The following Kafka components are available for installation:
- Kafka
- Kafka Connect
- Kafka MirrorMaker
- Kafka MirrorMaker 2
- Kafka Topic
- Kafka User
- Kafka Bridge
- Kafka Connector
- Kafka Rebalance
You select the component and create an instance. As a minimum, you create a Kafka instance. This procedure describes how to create a Kafka instance using the default settings. You can configure the default installation specification before you perform the installation.
The process is the same for creating instances of other Kafka components.
Prerequisites
- The AMQ Streams operator is installed on the OpenShift cluster.
Procedure
Navigate in the web console to the Operators > Installed Operators page and click AMQ Streams to display the operator details.
From Provided APIs, you can create instances of Kafka components.
Click Create instance under Kafka to create a Kafka instance.
By default, you’ll create a Kafka cluster called my-cluster with three Kafka broker nodes and three ZooKeeper nodes. The cluster uses ephemeral storage.

Click Create to start the installation of Kafka.
Wait until the status changes to Ready.
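If you prefer to follow the status from a terminal, a sketch using the project created earlier in this procedure:

# Watch the Kafka resource until its Ready condition is reported
oc get kafka my-cluster -n amq-streams-kafka -w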
Chapter 6. Deploying AMQ Streams using installation artifacts
Having prepared your environment for a deployment of AMQ Streams, you can deploy AMQ Streams to an OpenShift cluster. Use the installation files provided with the release artifacts.
AMQ Streams is based on Strimzi 0.36.x. You can deploy AMQ Streams 2.5 on OpenShift 4.10 to 4.14.
The steps to deploy AMQ Streams using the installation files are as follows:
- Deploy the Cluster Operator
Use the Cluster Operator to deploy the following:

- The Kafka cluster, including the Topic Operator and User Operator if required

Optionally, deploy the following Kafka components according to your requirements:

- Kafka Connect
- Kafka MirrorMaker
- Kafka Bridge
- Components for the monitoring of metrics
To run the commands in this guide, an OpenShift user must have the rights to manage role-based access control (RBAC) and CRDs.
6.1. Basic deployment path
You can set up a deployment where AMQ Streams manages a single Kafka cluster in the same namespace. You might use this configuration for development or testing. Or you can use AMQ Streams in a production environment to manage a number of Kafka clusters in different namespaces.
The first step for any deployment of AMQ Streams is to install the Cluster Operator using the install/cluster-operator files.

A single command applies all the installation files in the cluster-operator folder: oc apply -f ./install/cluster-operator.
The command sets up everything you need to be able to create and manage a Kafka deployment, including the following:
- Cluster Operator (Deployment, ConfigMap)
- AMQ Streams CRDs (CustomResourceDefinition)
- RBAC resources (ClusterRole, ClusterRoleBinding, RoleBinding)
- Service account (ServiceAccount)
The basic deployment path is as follows:
- Download the release artifacts
- Create an OpenShift namespace in which to deploy the Cluster Operator
- Update the install/cluster-operator files to use the namespace created for the Cluster Operator
- Install the Cluster Operator to watch one, multiple, or all namespaces
- Create a Kafka cluster
After that, you can deploy other Kafka components and set up monitoring of your deployment.
6.2. Deploying the Cluster Operator
The Cluster Operator is responsible for deploying and managing Kafka clusters within an OpenShift cluster.
When the Cluster Operator is running, it starts to watch for updates of Kafka resources.
By default, a single replica of the Cluster Operator is deployed. You can add replicas with leader election so that additional Cluster Operators are on standby in case of disruption. For more information, see Section 8.5.3, “Running multiple Cluster Operator replicas with leader election”.
6.2.1. Specifying the namespaces the Cluster Operator watches
The Cluster Operator watches for updates in the namespaces where the Kafka resources are deployed. When you deploy the Cluster Operator, you specify which namespaces to watch in the OpenShift cluster. You can specify the following namespaces:
- A single selected namespace (the same namespace containing the Cluster Operator)
- Multiple selected namespaces
- All namespaces in the cluster
Watching multiple selected namespaces has the most impact on performance due to increased processing overhead. To optimize performance for namespace monitoring, it is generally recommended to either watch a single namespace or monitor the entire cluster. Watching a single namespace allows for focused monitoring of namespace-specific resources, while monitoring all namespaces provides a comprehensive view of the cluster’s resources across all namespaces.
The Cluster Operator watches for changes to the following resources:
- Kafka for the Kafka cluster.
- KafkaConnect for the Kafka Connect cluster.
- KafkaConnector for creating and managing connectors in a Kafka Connect cluster.
- KafkaMirrorMaker for the Kafka MirrorMaker instance.
- KafkaMirrorMaker2 for the Kafka MirrorMaker 2 instance.
- KafkaBridge for the Kafka Bridge instance.
- KafkaRebalance for the Cruise Control optimization requests.
When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for the resource by creating the necessary OpenShift resources, such as Deployments, Pods, Services and ConfigMaps.
Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource.
Resources are either patched or deleted, and then recreated in order to make the cluster for the resource reflect the desired state of the cluster. This operation might cause a rolling update that might lead to service disruption.
When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources.
While the Cluster Operator can watch one, multiple, or all namespaces in an OpenShift cluster, the Topic Operator and User Operator watch for KafkaTopic and KafkaUser resources in a single namespace. For more information, see Section 1.2.1, “Watching AMQ Streams resources in OpenShift namespaces”.
6.2.2. Deploying the Cluster Operator to watch a single namespace
This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources in a single namespace in your OpenShift cluster.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into.
For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
Deploy the Cluster Operator:
oc create -f install/cluster-operator -n my-cluster-operator-namespace
Check the status of the deployment:
oc get deployments -n my-cluster-operator-namespace
Output shows the deployment name and readiness
NAME                      READY  UP-TO-DATE  AVAILABLE
strimzi-cluster-operator  1/1    1           1
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
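If the deployment does not become available, the Cluster Operator logs are the first place to check; for example:

oc logs deployment/strimzi-cluster-operator -n my-cluster-operator-namespace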
6.2.3. Deploying the Cluster Operator to watch multiple namespaces
This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across multiple namespaces in your OpenShift cluster.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into.
For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to add a list of all the namespaces the Cluster Operator will watch to the STRIMZI_NAMESPACE environment variable.

For example, in this procedure the Cluster Operator will watch the namespaces watched-namespace-1, watched-namespace-2, and watched-namespace-3.

apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
        - name: strimzi-cluster-operator
          image: registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.1
          imagePullPolicy: IfNotPresent
          env:
            - name: STRIMZI_NAMESPACE
              value: watched-namespace-1,watched-namespace-2,watched-namespace-3
For each namespace listed, install the RoleBindings.

In this example, we replace <watched_namespace> in these commands with the namespaces listed in the previous step, repeating the commands for watched-namespace-1, watched-namespace-2, and watched-namespace-3:

oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
oc create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>
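A short shell loop is a convenient way to apply the three role bindings to every watched namespace; a sketch using the example namespaces:

for ns in watched-namespace-1 watched-namespace-2 watched-namespace-3; do
  oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n $ns
  oc create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n $ns
  oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n $ns
done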
Deploy the Cluster Operator:
oc create -f install/cluster-operator -n my-cluster-operator-namespace
Check the status of the deployment:
oc get deployments -n my-cluster-operator-namespace
Output shows the deployment name and readiness
NAME                      READY  UP-TO-DATE  AVAILABLE
strimzi-cluster-operator  1/1    1           1
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
6.2.4. Deploying the Cluster Operator to watch all namespaces
This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across all namespaces in your OpenShift cluster.
When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into.
For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to set the value of the STRIMZI_NAMESPACE environment variable to *.

apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      # ...
      serviceAccountName: strimzi-cluster-operator
      containers:
        - name: strimzi-cluster-operator
          image: registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.1
          imagePullPolicy: IfNotPresent
          env:
            - name: STRIMZI_NAMESPACE
              value: "*"
          # ...
Create ClusterRoleBindings that grant cluster-wide access for all namespaces to the Cluster Operator.

oc create clusterrolebinding strimzi-cluster-operator-namespaced \
  --clusterrole=strimzi-cluster-operator-namespaced \
  --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator

oc create clusterrolebinding strimzi-cluster-operator-watched \
  --clusterrole=strimzi-cluster-operator-watched \
  --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator

oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation \
  --clusterrole=strimzi-entity-operator \
  --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
Deploy the Cluster Operator to your OpenShift cluster.
oc create -f install/cluster-operator -n my-cluster-operator-namespace
Check the status of the deployment:
oc get deployments -n my-cluster-operator-namespace
Output shows the deployment name and readiness
NAME                      READY  UP-TO-DATE  AVAILABLE
strimzi-cluster-operator  1/1    1           1
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
6.3. Deploying Kafka
To be able to manage a Kafka cluster with the Cluster Operator, you must deploy it as a Kafka resource. AMQ Streams provides example deployment files to do this. You can use these files to deploy the Topic Operator and User Operator at the same time.
After you have deployed the Cluster Operator, use a Kafka resource to deploy the following components:

- Kafka cluster
- Topic Operator and User Operator (through the Entity Operator configuration)
When installing Kafka, AMQ Streams also installs a ZooKeeper cluster and adds the necessary configuration to connect Kafka with ZooKeeper.
If you are trying the preview of the node pools feature, you can deploy a Kafka cluster with one or more node pools. Node pools provide configuration for a set of Kafka nodes. By using node pools, nodes can have different configuration within the same Kafka cluster.

Node pools are not enabled by default, so you must enable the KafkaNodePools feature gate before using them.
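Feature gates are set in the Cluster Operator configuration; for example, a sketch of enabling the gate through the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator Deployment:

env:
  # Enable the node pools preview feature gate
  - name: STRIMZI_FEATURE_GATES
    value: "+KafkaNodePools"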
If you haven’t deployed a Kafka cluster as a Kafka resource, you can’t use the Cluster Operator to manage it. This applies, for example, to a Kafka cluster running outside of OpenShift. However, you can use the Topic Operator and User Operator with a Kafka cluster that is not managed by AMQ Streams, by deploying them as standalone components. You can also deploy and use other Kafka components with a Kafka cluster not managed by AMQ Streams.
6.3.1. Deploying the Kafka cluster
This procedure shows how to deploy a Kafka cluster to your OpenShift cluster using the Cluster Operator.
The deployment uses a YAML file to provide the specification to create a Kafka resource.
AMQ Streams provides the following example files you can use to create a Kafka cluster:
kafka-persistent.yaml
- Deploys a persistent cluster with three ZooKeeper and three Kafka nodes.
kafka-jbod.yaml
- Deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes).
kafka-persistent-single.yaml
- Deploys a persistent cluster with a single ZooKeeper node and a single Kafka node.
kafka-ephemeral.yaml
- Deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes.
kafka-ephemeral-single.yaml
- Deploys an ephemeral cluster with three ZooKeeper nodes and a single Kafka node.
In this procedure, we use the examples for an ephemeral and persistent Kafka cluster deployment.
- Ephemeral cluster
- In general, an ephemeral (or temporary) Kafka cluster is suitable for development and testing purposes, not for production. This deployment uses emptyDir volumes for storing broker information (for ZooKeeper) and topics or partitions (for Kafka). Using an emptyDir volume means that its content is strictly related to the pod life cycle and is deleted when the pod goes down.
- Persistent cluster
- A persistent Kafka cluster uses persistent volumes to store ZooKeeper and Kafka data. A PersistentVolume is acquired using a PersistentVolumeClaim to make it independent of the actual type of the PersistentVolume. The PersistentVolumeClaim can use a StorageClass to trigger automatic volume provisioning. When no StorageClass is specified, OpenShift will try to use the default StorageClass.

The following examples show some common types of persistent volumes:
- If your OpenShift cluster runs on Amazon AWS, OpenShift can provision Amazon EBS volumes
- If your OpenShift cluster runs on Microsoft Azure, OpenShift can provision Azure Disk Storage volumes
- If your OpenShift cluster runs on Google Cloud, OpenShift can provision Persistent Disk volumes
- If your OpenShift cluster runs on bare metal, OpenShift can provision local persistent volumes
The example YAML files specify the latest supported Kafka version, and configuration for its supported log message format version and inter-broker protocol version. The inter.broker.protocol.version property for the Kafka config must be the version supported by the specified Kafka version (spec.kafka.version). The property represents the version of Kafka protocol used in a Kafka cluster.

From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn’t need to be set.

An update to the inter.broker.protocol.version is required when upgrading Kafka.
The example clusters are named my-cluster
by default. The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name
property of the Kafka
resource in the relevant YAML file.
Default cluster name and specified Kafka versions
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.5.0
    #...
    config:
      #...
      log.message.format.version: "3.5"
      inter.broker.protocol.version: "3.5"
      # ...
Prerequisites
- The Cluster Operator must be deployed.
Procedure
Create and deploy an ephemeral or persistent cluster.
To create and deploy an ephemeral cluster:
oc apply -f examples/kafka/kafka-ephemeral.yaml
To create and deploy a persistent cluster:
oc apply -f examples/kafka/kafka-persistent.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the pod names and readiness
NAME                        READY   STATUS    RESTARTS
my-cluster-entity-operator  3/3     Running   0
my-cluster-kafka-0          1/1     Running   0
my-cluster-kafka-1          1/1     Running   0
my-cluster-kafka-2          1/1     Running   0
my-cluster-zookeeper-0      1/1     Running   0
my-cluster-zookeeper-1      1/1     Running   0
my-cluster-zookeeper-2      1/1     Running   0
my-cluster is the name of the Kafka cluster.

A sequential index number starting with 0 identifies each Kafka and ZooKeeper pod created.

With the default deployment, you create an Entity Operator pod, 3 Kafka pods, and 3 ZooKeeper pods.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
Additional resources
6.3.2. (Preview) Deploying Kafka node pools
This procedure shows how to deploy Kafka node pools to your OpenShift cluster using the Cluster Operator. Node pools represent a distinct group of Kafka nodes within a Kafka cluster that share the same configuration. For each Kafka node in the node pool, any configuration not defined in the node pool is inherited from the cluster configuration in the Kafka resource.
The node pools feature is available as a preview. Node pools are not enabled by default, so you must enable the KafkaNodePools
feature gate before using them.
The deployment uses a YAML file to provide the specification to create a KafkaNodePool
resource. You can use node pools with Kafka clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper for cluster management.
KRaft mode is not ready for production in Apache Kafka or in AMQ Streams.
AMQ Streams provides the following example files that you can use to create a Kafka node pool:
kafka.yaml
- Deploys ZooKeeper with 3 nodes, and 2 different pools of Kafka brokers. Each of the pools has 3 brokers. The pools in the example use different storage configurations.
kafka-with-dual-role-kraft-nodes.yaml
- Deploys a Kafka cluster with one pool of KRaft nodes that share the broker and controller roles.
kafka-with-kraft.yaml
- Deploys a Kafka cluster with one pool of controller nodes and one pool of broker nodes.
You don’t need to start using node pools right away. If you decide to use them, you can perform the steps outlined here to deploy a new Kafka cluster with KafkaNodePool
resources or migrate your existing Kafka cluster.
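Before running the procedure, it may help to see the shape of a KafkaNodePool resource. The following is a minimal sketch, assuming a Kafka cluster named my-cluster and a pool named pool-a; the replicas, roles, and storage properties are the per-pool settings that extend or override the cluster configuration.

Example KafkaNodePool resource (sketch)

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster   # links the pool to the Kafka resource
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false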
Prerequisites
- The Cluster Operator must be deployed.
If you want to migrate an existing Kafka cluster to use node pools, see the steps to migrate existing Kafka clusters.
Procedure
Enable the KafkaNodePools feature gate from the command line:

oc set env deployment/strimzi-cluster-operator STRIMZI_FEATURE_GATES="+KafkaNodePools"

Or edit the Cluster Operator Deployment and update the STRIMZI_FEATURE_GATES environment variable:

env:
  - name: STRIMZI_FEATURE_GATES
    value: +KafkaNodePools

This updates the Cluster Operator.

If using KRaft mode, enable the UseKRaft feature gate as well.

Create a node pool.
To deploy a Kafka cluster and ZooKeeper cluster with two node pools of three brokers:
oc apply -f examples/kafka/nodepools/kafka.yaml
To deploy a Kafka cluster in KRaft mode with a single node pool that uses dual-role nodes:
oc apply -f examples/kafka/nodepools/kafka-with-dual-role-kraft-nodes.yaml
To deploy a Kafka cluster in KRaft mode with separate node pools for broker and controller nodes:
oc apply -f examples/kafka/nodepools/kafka-with-kraft.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the node pool names and readiness
NAME                        READY   STATUS    RESTARTS
my-cluster-entity-operator  3/3     Running   0
my-cluster-pool-a-kafka-0   1/1     Running   0
my-cluster-pool-a-kafka-1   1/1     Running   0
my-cluster-pool-a-kafka-4   1/1     Running   0
my-cluster is the name of the Kafka cluster.

pool-a is the name of the node pool.

A sequential index number starting with 0 identifies each Kafka pod created. If you are using ZooKeeper, you'll also see the ZooKeeper pods.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

Information on the deployment is also shown in the status of the KafkaNodePool resource, including a list of IDs for nodes in the pool.

Note: Node IDs are assigned sequentially starting at 0 (zero) across all node pools within a cluster. This means that node IDs might not run sequentially within a specific node pool. If there are gaps in the sequence of node IDs across the cluster, the next node to be added is assigned an ID that fills the gap. When scaling down, the node with the highest node ID within a pool is removed.
Additional resources
6.3.3. Deploying the Topic Operator using the Cluster Operator
This procedure describes how to deploy the Topic Operator using the Cluster Operator. The Topic Operator can be deployed for use in either bidirectional mode or unidirectional mode. To learn more about bidirectional and unidirectional topic management, see Section 9.1, “Topic management modes”.
Unidirectional topic management is available as a preview. Unidirectional topic management is not enabled by default, so you must enable the UnidirectionalTopicOperator
feature gate to be able to use it.
You configure the entityOperator
property of the Kafka
resource to include the topicOperator
. By default, the Topic Operator watches for KafkaTopic
resources in the namespace of the Kafka cluster deployed by the Cluster Operator. You can also specify a namespace using watchedNamespace
in the Topic Operator spec
. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator.
If you use AMQ Streams to deploy multiple Kafka clusters into the same namespace, enable the Topic Operator for only one Kafka cluster or use the watchedNamespace
property to configure the Topic Operators to watch other namespaces.
If you want to use the Topic Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the Topic Operator as a standalone component.
For more information about configuring the entityOperator
and topicOperator
properties, see Configuring the Entity Operator.
Prerequisites
- The Cluster Operator must be deployed.
Procedure
Edit the entityOperator properties of the Kafka resource to include topicOperator:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  #...
  entityOperator:
    topicOperator: {}
    userOperator: {}

Configure the Topic Operator spec using the properties described in the EntityTopicOperatorSpec schema reference.

Use an empty object ({}) if you want all properties to use their default values.

Create or update the resource:
oc apply -f <kafka_configuration_file>
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the pod name and readiness
NAME                        READY   STATUS    RESTARTS
my-cluster-entity-operator  3/3     Running   0
# ...

my-cluster is the name of the Kafka cluster.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
6.3.4. Deploying the User Operator using the Cluster Operator
This procedure describes how to deploy the User Operator using the Cluster Operator.
You configure the entityOperator
property of the Kafka
resource to include the userOperator
. By default, the User Operator watches for KafkaUser
resources in the namespace of the Kafka cluster deployment. You can also specify a namespace using watchedNamespace
in the User Operator spec
. A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator.
If you want to use the User Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the User Operator as a standalone component.
For more information about configuring the entityOperator
and userOperator
properties, see Configuring the Entity Operator.
Prerequisites
- The Cluster Operator must be deployed.
Procedure
Edit the entityOperator properties of the Kafka resource to include userOperator:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  #...
  entityOperator:
    topicOperator: {}
    userOperator: {}

Configure the User Operator spec using the properties described in the EntityUserOperatorSpec schema reference.

Use an empty object ({}) if you want all properties to use their default values.

Create or update the resource:
oc apply -f <kafka_configuration_file>
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the pod name and readiness
NAME                        READY   STATUS    RESTARTS
my-cluster-entity-operator  3/3     Running   0
# ...

my-cluster is the name of the Kafka cluster.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
6.3.5. List of Kafka cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
Shared resources
cluster-name-cluster-ca
- Secret with the Cluster CA private key used to encrypt the cluster communication.
cluster-name-cluster-ca-cert
- Secret with the Cluster CA public key. This key can be used to verify the identity of the Kafka brokers.
cluster-name-clients-ca
- Secret with the Clients CA private key used to sign user certificates.
cluster-name-clients-ca-cert
- Secret with the Clients CA public key. This key can be used to verify the identity of the Kafka users.
cluster-name-cluster-operator-certs
- Secret with Cluster Operator keys for communication with Kafka and ZooKeeper.
ZooKeeper nodes
cluster-name-zookeeper
Name given to the following ZooKeeper resources:
- StrimziPodSet for managing the ZooKeeper node pods.
- Service account used by the ZooKeeper nodes.
- PodDisruptionBudget configured for the ZooKeeper nodes.
cluster-name-zookeeper-idx
- Pods created by the StrimziPodSet.
cluster-name-zookeeper-nodes
- Headless Service needed to have DNS resolve the ZooKeeper pods' IP addresses directly.
cluster-name-zookeeper-client
- Service used by Kafka brokers to connect to ZooKeeper nodes as clients.
cluster-name-zookeeper-config
- ConfigMap that contains the ZooKeeper ancillary configuration, and is mounted as a volume by the ZooKeeper node pods.
cluster-name-zookeeper-nodes
- Secret with ZooKeeper node keys.
cluster-name-network-policy-zookeeper
- Network policy managing access to the ZooKeeper services.
data-cluster-name-zookeeper-idx
- Persistent Volume Claim for the volume used for storing data for the ZooKeeper node pod idx. This resource is created only if persistent storage is selected for provisioning persistent volumes to store data.
Kafka brokers
cluster-name-kafka
Name given to the following Kafka resources:
- StrimziPodSet for managing the Kafka broker pods.
- Service account used by the Kafka pods.
- PodDisruptionBudget configured for the Kafka brokers.
cluster-name-kafka-idx
Name given to the following Kafka resources:
- Pods created by the StrimziPodSet.
- ConfigMaps with Kafka broker configuration.
cluster-name-kafka-brokers
- Service needed to have DNS resolve the Kafka broker pods' IP addresses directly.
cluster-name-kafka-bootstrap
- Service that can be used as the bootstrap server for Kafka clients connecting from within the OpenShift cluster.
cluster-name-kafka-external-bootstrap
- Bootstrap service for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and the port is 9094.
cluster-name-kafka-pod-id
- Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and the port is 9094.
cluster-name-kafka-external-bootstrap
- Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route. The old route name will be used for backwards compatibility when the listener name is external and the port is 9094.
cluster-name-kafka-pod-id
- Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route. The old route name will be used for backwards compatibility when the listener name is external and the port is 9094.
cluster-name-kafka-listener-name-bootstrap
- Bootstrap service for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners.
cluster-name-kafka-listener-name-pod-id
- Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners.
cluster-name-kafka-listener-name-bootstrap
- Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route. The new route name will be used for all other external listeners.
cluster-name-kafka-listener-name-pod-id
- Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route. The new route name will be used for all other external listeners.
cluster-name-kafka-config
- ConfigMap containing the Kafka ancillary configuration, which is mounted as a volume by the broker pods when the UseStrimziPodSets feature gate is disabled.
cluster-name-kafka-brokers
- Secret with Kafka broker keys.
cluster-name-network-policy-kafka
- Network policy managing access to the Kafka services.
strimzi-namespace-name-cluster-name-kafka-init
- Cluster role binding used by the Kafka brokers.
cluster-name-jmx
- Secret with JMX username and password used to secure the Kafka broker port. This resource is created only when JMX is enabled in Kafka.
data-cluster-name-kafka-idx
- Persistent Volume Claim for the volume used for storing data for the Kafka broker pod idx. This resource is created only if persistent storage is selected for provisioning persistent volumes to store data.
data-id-cluster-name-kafka-idx
- Persistent Volume Claim for the volume id used for storing data for the Kafka broker pod idx. This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data.
Entity Operator
These resources are only created if the Entity Operator is deployed using the Cluster Operator.
cluster-name-entity-operator
Name given to the following Entity Operator resources:
- Deployment with Topic and User Operators.
- Service account used by the Entity Operator.
- Network policy managing access to the Entity Operator metrics.
cluster-name-entity-operator-random-string
- Pod created by the Entity Operator deployment.
cluster-name-entity-topic-operator-config
- ConfigMap with ancillary configuration for Topic Operators.
cluster-name-entity-user-operator-config
- ConfigMap with ancillary configuration for User Operators.
cluster-name-entity-topic-operator-certs
- Secret with Topic Operator keys for communication with Kafka and ZooKeeper.
cluster-name-entity-user-operator-certs
- Secret with User Operator keys for communication with Kafka and ZooKeeper.
strimzi-cluster-name-entity-topic-operator
- Role binding used by the Entity Topic Operator.
strimzi-cluster-name-entity-user-operator
- Role binding used by the Entity User Operator.
Kafka Exporter
These resources are only created if the Kafka Exporter is deployed using the Cluster Operator.
cluster-name-kafka-exporter
Name given to the following Kafka Exporter resources:
- Deployment with Kafka Exporter.
- Service used to collect consumer lag metrics.
- Service account used by the Kafka Exporter.
- Network policy managing access to the Kafka Exporter metrics.
cluster-name-kafka-exporter-random-string
- Pod created by the Kafka Exporter deployment.
Cruise Control
These resources are only created if Cruise Control is deployed using the Cluster Operator.
cluster-name-cruise-control
Name given to the following Cruise Control resources:
- Deployment with Cruise Control.
- Service used to communicate with Cruise Control.
- Service account used by Cruise Control.
cluster-name-cruise-control-random-string
- Pod created by the Cruise Control deployment.
cluster-name-cruise-control-config
- ConfigMap that contains the Cruise Control ancillary configuration, and is mounted as a volume by the Cruise Control pods.
cluster-name-cruise-control-certs
- Secret with Cruise Control keys for communication with Kafka and ZooKeeper.
cluster-name-network-policy-cruise-control
- Network policy managing access to the Cruise Control service.
6.4. Deploying Kafka Connect
Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems using connector plugins. Kafka Connect provides a framework for integrating Kafka with an external data source or target, such as a database or messaging system, for import or export of data using connectors. Connectors are plugins that provide the connection configuration needed.
In AMQ Streams, Kafka Connect is deployed in distributed mode. Kafka Connect can also work in standalone mode, but this is not supported by AMQ Streams.
Using the concept of connectors, Kafka Connect provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability.
The Cluster Operator manages Kafka Connect clusters deployed using the KafkaConnect
resource and connectors created using the KafkaConnector
resource.
To use Kafka Connect, you deploy a Kafka Connect cluster, add the connector plugins you need, and then create and manage connector instances.
The term connector is used interchangeably to mean a connector instance running within a Kafka Connect cluster, or a connector class. In this guide, the term connector is used when the meaning is clear from the context.
6.4.1. Deploying Kafka Connect to your OpenShift cluster
This procedure shows how to deploy a Kafka Connect cluster to your OpenShift cluster using the Cluster Operator.
A Kafka Connect cluster deployment is implemented with a configurable number of nodes (also called workers) that distribute the workload of connectors as tasks so that the message flow is highly scalable and reliable.
The deployment uses a YAML file to provide the specification to create a KafkaConnect
resource.
AMQ Streams provides example configuration files. In this procedure, we use the following example file:
- examples/connect/kafka-connect.yaml

Prerequisites
- The Cluster Operator must be deployed.

Procedure

Deploy Kafka Connect to your OpenShift cluster. Use the examples/connect/kafka-connect.yaml file to deploy Kafka Connect.

oc apply -f examples/connect/kafka-connect.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the deployment name and readiness
NAME                                 READY   STATUS    RESTARTS
my-connect-cluster-connect-<pod_id>  1/1     Running   0
my-connect-cluster is the name of the Kafka Connect cluster.

A pod ID identifies each pod created.

With the default deployment, you create a single Kafka Connect pod.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
Additional resources
6.4.2. Configuring Kafka Connect for multiple instances
If you are running multiple instances of Kafka Connect, you have to change the default configuration of the following config
properties:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: connect-cluster 1
    offset.storage.topic: connect-cluster-offsets 2
    config.storage.topic: connect-cluster-configs 3
    status.storage.topic: connect-cluster-status 4
  # ...
# ...

- 1
- The Kafka Connect cluster group ID within Kafka, which must be unique for each Kafka Connect cluster.
- 2
- Kafka topic that stores connector offsets.
- 3
- Kafka topic that stores connector and task status configurations.
- 4
- Kafka topic that stores connector and task status updates.

Values for the three topics must be the same for all Kafka Connect instances with the same group.id.
Unless you change the default settings, each Kafka Connect instance connecting to the same Kafka cluster is deployed with the same values. In effect, all instances are coupled to run in a cluster and use the same topics.
If multiple Kafka Connect clusters try to use the same topics, Kafka Connect will not work as expected and generate errors.
If you wish to run multiple Kafka Connect instances, change the values of these properties for each instance.
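For example, a second Kafka Connect instance connecting to the same Kafka cluster might be configured as follows. The name my-connect-2 and the topic names are assumptions for illustration; any values work as long as they do not clash with another Kafka Connect cluster.

Example configuration for a second Kafka Connect instance (sketch)

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-2
spec:
  # ...
  config:
    group.id: connect-cluster-2                      # must differ from other Connect clusters
    offset.storage.topic: connect-cluster-2-offsets  # per-cluster offsets topic
    config.storage.topic: connect-cluster-2-configs  # per-cluster configs topic
    status.storage.topic: connect-cluster-2-status   # per-cluster status topic
  # ...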
6.4.3. Adding connectors
Kafka Connect uses connectors to integrate with other systems to stream data. A connector is an instance of a Kafka Connector class, which can be one of the following types:
- Source connector
- A source connector is a runtime entity that fetches data from an external system and feeds it to Kafka as messages.
- Sink connector
- A sink connector is a runtime entity that fetches messages from Kafka topics and feeds them to an external system.
Kafka Connect uses a plugin architecture to provide the implementation artifacts for connectors. Plugins allow connections to other systems and provide additional configuration to manipulate data. Plugins include connectors and other components, such as data converters and transforms. A connector operates with a specific type of external system. Each connector defines a schema for its configuration. You supply the configuration to Kafka Connect to create a connector instance within Kafka Connect. Connector instances then define a set of tasks for moving data between systems.
Add connector plugins to Kafka Connect in one of the following ways:
- Configure Kafka Connect to build a new container image with plugins automatically
- Create a Docker image from the base Kafka Connect image (manually or using continuous integration)
After plugins have been added to the container image, you can start, stop, and manage connector instances using KafkaConnector custom resources or the Kafka Connect REST API. You can also create new connector instances using these options.
6.4.3.1. Building a new container image with connector plugins automatically
Configure Kafka Connect so that AMQ Streams automatically builds a new container image with additional connectors. You define the connector plugins using the .spec.build.plugins
property of the KafkaConnect
custom resource. AMQ Streams will automatically download and add the connector plugins into a new container image. The container is pushed into the container repository specified in .spec.build.output
and automatically used in the Kafka Connect deployment.
Prerequisites
- The Cluster Operator must be deployed.
- A container registry.
You need to provide your own container registry where images can be pushed to, stored, and pulled from. AMQ Streams supports private container registries as well as public registries such as Quay or Docker Hub.
Procedure
Configure the KafkaConnect custom resource by specifying the container registry in .spec.build.output, and additional connectors in .spec.build.plugins:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec: 1
  #...
  build:
    output: 2
      type: docker
      image: my-registry.io/my-org/my-connect-cluster:latest
      pushSecret: my-registry-credentials
    plugins: 3
      - name: debezium-postgres-connector
        artifacts:
          - type: tgz
            url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/2.1.3.Final/debezium-connector-postgres-2.1.3.Final-plugin.tar.gz
            sha512sum: c4ddc97846de561755dc0b021a62aba656098829c70eb3ade3b817ce06d852ca12ae50c0281cc791a5a131cb7fc21fb15f4b8ee76c6cae5dd07f9c11cb7c6e79
      - name: camel-telegram
        artifacts:
          - type: tgz
            url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.11.5/camel-telegram-kafka-connector-0.11.5-package.tar.gz
            sha512sum: d6d9f45e0d1dbfcc9f6d1c7ca2046168c764389c78bc4b867dab32d24f710bb74ccf2a007d7d7a8af2dfca09d9a52ccbc2831fc715c195a3634cca055185bd91
  #...

- 1
- The specification for the Kafka Connect cluster.
- 2
- Configuration of the container registry where new images are pushed.
- 3
- List of connector plugins and their artifacts to add to the new container image.
Create or update the resource:
$ oc apply -f <kafka_connect_configuration_file>
- Wait for the new container image to build, and for the Kafka Connect cluster to be deployed.
- Use the Kafka Connect REST API or KafkaConnector custom resources to use the connector plugins you added.
Additional resources
6.4.3.2. Building a new container image with connector plugins from the Kafka Connect base image
Create a custom Docker image with connector plugins from the Kafka Connect base image. Add the connector plugins to the /opt/kafka/plugins directory of the custom image.
You can use the Kafka container image on Red Hat Ecosystem Catalog as a base image for creating your own custom image with additional connector plugins.
At startup, the AMQ Streams version of Kafka Connect loads any third-party connector plugins contained in the /opt/kafka/plugins
directory.
Prerequisites
Procedure
Create a new Dockerfile using registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.1 as the base image:

FROM registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.1
USER root:root
COPY ./my-plugins/ /opt/kafka/plugins/
USER 1001
Example plugins file
$ tree ./my-plugins/
./my-plugins/
├── debezium-connector-mongodb
│   ├── bson-<version>.jar
│   ├── CHANGELOG.md
│   ├── CONTRIBUTE.md
│   ├── COPYRIGHT.txt
│   ├── debezium-connector-mongodb-<version>.jar
│   ├── debezium-core-<version>.jar
│   ├── LICENSE.txt
│   ├── mongodb-driver-core-<version>.jar
│   ├── README.md
│   └── # ...
├── debezium-connector-mysql
│   ├── CHANGELOG.md
│   ├── CONTRIBUTE.md
│   ├── COPYRIGHT.txt
│   ├── debezium-connector-mysql-<version>.jar
│   ├── debezium-core-<version>.jar
│   ├── LICENSE.txt
│   ├── mysql-binlog-connector-java-<version>.jar
│   ├── mysql-connector-java-<version>.jar
│   ├── README.md
│   └── # ...
└── debezium-connector-postgres
    ├── CHANGELOG.md
    ├── CONTRIBUTE.md
    ├── COPYRIGHT.txt
    ├── debezium-connector-postgres-<version>.jar
    ├── debezium-core-<version>.jar
    ├── LICENSE.txt
    ├── postgresql-<version>.jar
    ├── protobuf-java-<version>.jar
    ├── README.md
    └── # ...
The COPY command points to the plugin files to copy to the container image.
This example adds plugins for Debezium connectors (MongoDB, MySQL, and PostgreSQL), though not all files are listed for brevity. Debezium running in Kafka Connect looks the same as any other Kafka Connect task.
- Build the container image.
- Push your custom image to your container registry.
Point to the new container image.
You can point to the image in one of the following ways:
- Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource.

If set, this property overrides the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the Cluster Operator.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec: 1
  #...
  image: my-new-container-image 2
  config: 3
    #...

- 1
- The specification for the Kafka Connect cluster.
- 2
- The docker image for the pods.
- 3
- Configuration of the Kafka Connect workers (not connectors).

- Edit the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to point to the new container image, and then reinstall the Cluster Operator.
6.4.3.3. Deploying KafkaConnector resources
Deploy KafkaConnector
resources to manage connectors. The KafkaConnector
custom resource offers an OpenShift-native approach to management of connectors by the Cluster Operator. You don’t need to send HTTP requests to manage connectors, as with the Kafka Connect REST API. You manage a running connector instance by updating its corresponding KafkaConnector
resource, and then applying the updates. The Cluster Operator updates the configurations of the running connector instances. You remove a connector by deleting its corresponding KafkaConnector
.
KafkaConnector
resources must be deployed to the same namespace as the Kafka Connect cluster they link to.
In the configuration shown in this procedure, the autoRestart
property is set to true
. This enables automatic restarts of failed connectors and tasks. Up to seven restart attempts are made, after which restarts must be made manually. You annotate the KafkaConnector
resource to restart a connector or restart a connector task manually.
Example connectors
You can use your own connectors or try the examples provided by AMQ Streams. Up until Apache Kafka 3.1.0, example file connector plugins were included with Apache Kafka. Starting from the 3.1.1 and 3.2.0 releases of Apache Kafka, the examples need to be added to the plugin path as any other connector.
AMQ Streams provides an example KafkaConnector
configuration file (examples/connect/source-connector.yaml
) for the example file connector plugins, which creates the following connector instances as KafkaConnector
resources:
- A FileStreamSourceConnector instance that reads each line from the Kafka license file (the source) and writes the data as messages to a single Kafka topic.
- A FileStreamSinkConnector instance that reads messages from the Kafka topic and writes the messages to a temporary file (the sink).
We use the example file to create connectors in this procedure.
The example connectors are not intended for use in a production environment.
Prerequisites
- A Kafka Connect deployment
- The Cluster Operator is running
Procedure
Add the FileStreamSourceConnector and FileStreamSinkConnector plugins to Kafka Connect in one of the following ways:

- Configure Kafka Connect to build a new container image with plugins automatically
- Create a Docker image from the base Kafka Connect image (manually or using continuous integration)
Set the strimzi.io/use-connector-resources annotation to true in the Kafka Connect configuration.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
With the KafkaConnector resources enabled, the Cluster Operator watches for them.

Edit the examples/connect/source-connector.yaml file:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector 1
  labels:
    strimzi.io/cluster: my-connect-cluster 2
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector 3
  tasksMax: 2 4
  autoRestart: 5
    enabled: true
  config: 6
    file: "/opt/kafka/LICENSE" 7
    topic: my-topic 8
  # ...
- 1
- Name of the
KafkaConnector
resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource. - 2
- Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to.
- 3
- Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster.
- 4
- Maximum number of Kafka Connect tasks that the connector can create.
- 5
- Enables automatic restarts of failed connectors and tasks.
- 6
- Connector configuration as key-value pairs.
- 7
- This example source connector configuration reads data from the
/opt/kafka/LICENSE
file. - 8
- Kafka topic to publish the source data to.
Create the source KafkaConnector in your OpenShift cluster:

oc apply -f examples/connect/source-connector.yaml
Create an examples/connect/sink-connector.yaml file:

touch examples/connect/sink-connector.yaml

Paste the following YAML into the sink-connector.yaml file:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-sink-connector
  labels:
    strimzi.io/cluster: my-connect
spec:
  class: org.apache.kafka.connect.file.FileStreamSinkConnector 1
  tasksMax: 2
  config: 2
    file: "/tmp/my-file" 3
    topics: my-topic 4
- 1
- Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster.
- 2
- Connector configuration as key-value pairs.
- 3
- Temporary file to write the consumed messages to.
- 4
- Kafka topic to read the source data from.
Create the sink KafkaConnector in your OpenShift cluster:

oc apply -f examples/connect/sink-connector.yaml
Check that the connector resources were created:

oc get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name

my-source-connector
my-sink-connector
Replace <my_connect_cluster> with the name of your Kafka Connect cluster.
In the container, execute kafka-console-consumer.sh to read the messages that were written to the topic by the source connector:

oc exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap.NAMESPACE.svc:9092 --topic my-topic --from-beginning
Replace <my_kafka_cluster> with the name of your Kafka cluster.
Source and sink connector configuration options
The connector configuration is defined in the spec.config
property of the KafkaConnector
resource.
The FileStreamSourceConnector
and FileStreamSinkConnector
classes support the same configuration options as the Kafka Connect REST API. Other connectors support different configuration options.
Table 6.1. Configuration options for the FileStreamSource connector class

Name | Type | Default value | Description
---|---|---|---
file | String | Null | Source file to read messages from. If not specified, the standard input is used.
topic | List | Null | The Kafka topic to publish data to.
Table 6.2. Configuration options for the FileStreamSinkConnector class

Name | Type | Default value | Description
---|---|---|---
file | String | Null | Destination file to write messages to. If not specified, the standard output is used.
topics | List | Null | One or more Kafka topics to read data from.
topics.regex | String | Null | A regular expression matching one or more Kafka topics to read data from.
6.4.3.4. Manually restarting connectors
If you are using KafkaConnector
resources to manage connectors, use the restart
annotation to manually trigger a restart of a connector.
Prerequisites
- The Cluster Operator is running.
Procedure
Find the name of the KafkaConnector custom resource that controls the Kafka connector you want to restart:

oc get KafkaConnector

Restart the connector by annotating the KafkaConnector resource in OpenShift:

oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart=true

The restart annotation is set to true.

Wait for the next reconciliation to occur (every two minutes by default).

The Kafka connector is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource.
6.4.3.5. Manually restarting Kafka connector tasks
If you are using KafkaConnector
resources to manage connectors, use the restart-task
annotation to manually trigger a restart of a connector task.
Prerequisites
- The Cluster Operator is running.
Procedure
Find the name of the KafkaConnector custom resource that controls the Kafka connector task you want to restart:

oc get KafkaConnector

Find the ID of the task to be restarted from the KafkaConnector custom resource. Task IDs are non-negative integers, starting from 0:

oc describe KafkaConnector <kafka_connector_name>

Use the ID to restart the connector task by annotating the KafkaConnector resource in OpenShift:

oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart-task=0

In this example, task 0 is restarted.

Wait for the next reconciliation to occur (every two minutes by default).

The Kafka connector task is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource.
6.4.3.6. Exposing the Kafka Connect API
Use the Kafka Connect REST API as an alternative to using KafkaConnector
resources to manage connectors. The Kafka Connect REST API is available as a service running on <connect_cluster_name>-connect-api:8083
, where <connect_cluster_name> is the name of your Kafka Connect cluster. The service is created when you create a Kafka Connect instance.
The operations supported by the Kafka Connect REST API are described in the Apache Kafka Connect API documentation.
The strimzi.io/use-connector-resources
annotation enables KafkaConnectors. If you applied the annotation to your KafkaConnect
resource configuration, you need to remove it to use the Kafka Connect API. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.
You can add the connector configuration as a JSON object.
Example curl request to add connector configuration
curl -X POST \
  http://my-connect-cluster-connect-api:8083/connectors \
  -H 'Content-Type: application/json' \
  -d '{ "name": "my-source-connector",
    "config":
    {
      "connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector",
      "file": "/opt/kafka/LICENSE",
      "topic":"my-topic",
      "tasksMax": "4",
      "type": "source"
    }
}'
The API is only accessible within the OpenShift cluster. If you want to make the Kafka Connect API accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features:
- LoadBalancer or NodePort type services
- Ingress resources (Kubernetes only)
- OpenShift routes (OpenShift only)
The connection is insecure, so allow external access only with caution.
If you decide to create services, use the labels from the selector
of the <connect_cluster_name>-connect-api
service to configure the pods to which the service will route the traffic:
Selector configuration for the service

# ...
selector:
  strimzi.io/cluster: my-connect-cluster 1
  strimzi.io/kind: KafkaConnect
  strimzi.io/name: my-connect-cluster-connect 2
#...

- 1
- Name of the Kafka Connect custom resource in your OpenShift cluster.
- 2
- Name of the Kafka Connect deployment created by the Cluster Operator.
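For example, a NodePort service exposing the API might look like the following sketch, which reuses the selector labels shown above. The service name my-connect-api-external is an assumption for illustration.

Example NodePort service for the Kafka Connect API (sketch)

apiVersion: v1
kind: Service
metadata:
  name: my-connect-api-external   # assumed name
spec:
  type: NodePort
  selector:
    strimzi.io/cluster: my-connect-cluster
    strimzi.io/kind: KafkaConnect
    strimzi.io/name: my-connect-cluster-connect
  ports:
    - port: 8083
      targetPort: 8083
      protocol: TCP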
You must also create a NetworkPolicy
that allows HTTP requests from external clients.
Example NetworkPolicy to allow requests to the Kafka Connect API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: my-custom-connect-network-policy
spec:
ingress:
- from:
- podSelector: 1
matchLabels:
app: my-connector-manager
ports:
- port: 8083
protocol: TCP
podSelector:
matchLabels:
strimzi.io/cluster: my-connect-cluster
strimzi.io/kind: KafkaConnect
strimzi.io/name: my-connect-cluster-connect
policyTypes:
- Ingress
- 1
- The label of the pod that is allowed to connect to the API.
To add the connector configuration outside the cluster, use the URL of the resource that exposes the API in the curl command.
6.4.3.7. Limiting access to the Kafka Connect API
It is crucial to restrict access to the Kafka Connect API only to trusted users to prevent unauthorized actions and potential security issues. The Kafka Connect API provides extensive capabilities for altering connector configurations, which makes it all the more important to take security precautions. Someone with access to the Kafka Connect API could potentially obtain sensitive information that an administrator may assume is secure.
The Kafka Connect REST API can be accessed by anyone who has authenticated access to the OpenShift cluster and knows the endpoint URL, which includes the hostname/IP address and port number.
For example, suppose an organization uses a Kafka Connect cluster and connectors to stream sensitive data from a customer database to a central database. The administrator uses a configuration provider plugin to store sensitive information related to connecting to the customer database and the central database, such as database connection details and authentication credentials. The configuration provider protects this sensitive information from being exposed to unauthorized users. However, someone who has access to the Kafka Connect API can still obtain access to the customer database without the consent of the administrator. They can do this by setting up a fake database and configuring a connector to connect to it. They then modify the connector configuration to point to the customer database, but instead of sending the data to the central database, they send it to the fake database. By configuring the connector to connect to the fake database, the login details and credentials for connecting to the customer database are intercepted, even though they are stored securely in the configuration provider.
If you are using the KafkaConnector
custom resources, then by default the OpenShift RBAC rules permit only OpenShift cluster administrators to make changes to connectors. You can also designate non-cluster administrators to manage AMQ Streams resources. With KafkaConnector
resources enabled in your Kafka Connect configuration, changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. If you are not using the KafkaConnector
resource, the default RBAC rules do not limit access to the Kafka Connect API. If you want to limit direct access to the Kafka Connect REST API using OpenShift RBAC, you need to enable and use the KafkaConnector
resources.
For improved security, we recommend configuring the following properties for the Kafka Connect API:
org.apache.kafka.disallowed.login.modules
(Kafka 3.4 or later) Set the org.apache.kafka.disallowed.login.modules Java system property to prevent the use of insecure login modules. For example, specifying com.sun.security.auth.module.JndiLoginModule prevents the use of the Kafka JndiLoginModule.

Example configuration for disallowing login modules
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  jvmOptions:
    javaSystemProperties:
      - name: org.apache.kafka.disallowed.login.modules
        value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule
# ...
Only allow trusted login modules and follow the latest advice from Kafka for the version you are using. As a best practice, you should explicitly disallow insecure login modules in your Kafka Connect configuration by using the org.apache.kafka.disallowed.login.modules system property.

connector.client.config.override.policy

Set the connector.client.config.override.policy property to None to prevent connector configurations from overriding the Kafka Connect configuration and the consumers and producers it uses.

Example configuration to specify connector override policy
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    connector.client.config.override.policy: None
# ...
6.4.3.8. Switching from using the Kafka Connect API to using KafkaConnector custom resources
You can switch from using the Kafka Connect API to using KafkaConnector
custom resources to manage your connectors. To make the switch, do the following in the order shown:
- Deploy KafkaConnector resources with the configuration to create your connector instances.
- Enable KafkaConnector resources in your Kafka Connect configuration by setting the strimzi.io/use-connector-resources annotation to true.

If you enable KafkaConnector resources before creating them, the Cluster Operator deletes all connectors that are not described by KafkaConnector resources.
To switch from using KafkaConnector
resources to using the Kafka Connect API, first remove the annotation that enables the KafkaConnector
resources from your Kafka Connect configuration. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.
When making the switch, check the status of the KafkaConnect
resource. The value of metadata.generation
(the current version of the deployment) must match status.observedGeneration
(the latest reconciliation of the resource). When the Kafka Connect cluster is Ready
, you can delete the KafkaConnector
resources.
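As a quick check, you can compare the two generation values with a jsonpath query; this is a sketch assuming a Kafka Connect cluster named my-connect-cluster:

oc get kafkaconnect my-connect-cluster -o jsonpath='{.metadata.generation}{" "}{.status.observedGeneration}{"\n"}'

When the two numbers match and the cluster is Ready, it is safe to delete the KafkaConnector resources.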
6.4.4. List of Kafka Connect cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
- connect-cluster-name-connect
Name given to the following Kafka Connect resources:
- Deployment that creates the Kafka Connect worker node pods (when the StableConnectIdentities feature gate is disabled).
- StrimziPodSet that creates the Kafka Connect worker node pods (when the StableConnectIdentities feature gate is enabled).
- Headless service that provides stable DNS names to the Connect pods (when the StableConnectIdentities feature gate is enabled).
- Pod Disruption Budget configured for the Kafka Connect worker nodes.
- connect-cluster-name-connect-idx
- Pods created by the Kafka Connect StrimziPodSet (when the StableConnectIdentities feature gate is enabled).
- connect-cluster-name-connect-api
- Service which exposes the REST interface for managing the Kafka Connect cluster.
- connect-cluster-name-config
- ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka Connect pods.
6.5. Deploying Kafka MirrorMaker
Kafka MirrorMaker replicates data between two or more Kafka clusters, within or across data centers. This process is called mirroring to avoid confusion with the concept of Kafka partition replication. MirrorMaker consumes messages from a source cluster and republishes those messages to a target cluster.
Data replication across clusters supports scenarios that require the following:
- Recovery of data in the event of a system failure
- Consolidation of data from multiple source clusters for centralized analysis
- Restriction of data access to a specific cluster
- Provision of data at a specific location to improve latency
6.5.1. Deploying Kafka MirrorMaker to your OpenShift cluster
This procedure shows how to deploy a Kafka MirrorMaker cluster to your OpenShift cluster using the Cluster Operator.
The deployment uses a YAML file to provide the specification to create a KafkaMirrorMaker
or KafkaMirrorMaker2
resource depending on the version of MirrorMaker deployed.
Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker
custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated in AMQ Streams as well. The KafkaMirrorMaker
resource will be removed from AMQ Streams when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2
custom resource with the IdentityReplicationPolicy
.
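A minimal sketch of a KafkaMirrorMaker2 resource using the IdentityReplicationPolicy follows. The cluster aliases, bootstrap addresses, patterns, and replication factor are assumptions for illustration; adjust them to your source and target clusters.

Example KafkaMirrorMaker2 resource with IdentityReplicationPolicy (sketch)

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mm2-cluster
spec:
  version: 3.5.0
  replicas: 1
  connectCluster: "my-target-cluster"   # alias of the cluster used for the Connect workers
  clusters:
    - alias: "my-source-cluster"
      bootstrapServers: my-source-cluster-kafka-bootstrap:9092   # assumed address
    - alias: "my-target-cluster"
      bootstrapServers: my-target-cluster-kafka-bootstrap:9092   # assumed address
  mirrors:
    - sourceCluster: "my-source-cluster"
      targetCluster: "my-target-cluster"
      sourceConnector:
        config:
          replication.factor: 3
          # Keeps topic names unchanged in the target cluster, as MirrorMaker 1 did
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
      topicsPattern: ".*"
      groupsPattern: ".*"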
AMQ Streams provides example configuration files. In this procedure, we use the following example files:
- examples/mirror-maker/kafka-mirror-maker.yaml
- examples/mirror-maker/kafka-mirror-maker-2.yaml
Prerequisites
- The Cluster Operator must be deployed.
Procedure
Deploy Kafka MirrorMaker to your OpenShift cluster:
For MirrorMaker:
oc apply -f examples/mirror-maker/kafka-mirror-maker.yaml
For MirrorMaker 2:
oc apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the deployment name and readiness
NAME                                   READY   STATUS    RESTARTS
my-mirror-maker-mirror-maker-<pod_id>  1/1     Running   1
my-mm2-cluster-mirrormaker2-<pod_id>   1/1     Running   1

my-mirror-maker is the name of the Kafka MirrorMaker cluster. my-mm2-cluster is the name of the Kafka MirrorMaker 2 cluster.

A pod ID identifies each pod created.

With the default deployment, you install a single MirrorMaker or MirrorMaker 2 pod.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
Additional resources
6.5.2. List of Kafka MirrorMaker cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
- <mirror-maker-name>-mirror-maker
- Deployment which is responsible for creating the Kafka MirrorMaker pods.
- <mirror-maker-name>-config
- ConfigMap which contains ancillary configuration for Kafka MirrorMaker, and is mounted as a volume by the Kafka MirrorMaker pods.
- <mirror-maker-name>-mirror-maker
- Pod Disruption Budget configured for the Kafka MirrorMaker worker nodes.
6.6. Deploying Kafka Bridge
Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster.
6.6.1. Deploying Kafka Bridge to your OpenShift cluster
This procedure shows how to deploy a Kafka Bridge cluster to your OpenShift cluster using the Cluster Operator.
The deployment uses a YAML file to provide the specification to create a KafkaBridge
resource.
AMQ Streams provides example configuration files. In this procedure, we use the following example file:
- examples/bridge/kafka-bridge.yaml
Prerequisites
- The Cluster Operator must be deployed.
Procedure
Deploy Kafka Bridge to your OpenShift cluster:
oc apply -f examples/bridge/kafka-bridge.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the deployment name and readiness
NAME                       READY   STATUS    RESTARTS
my-bridge-bridge-<pod_id>  1/1     Running   0

my-bridge is the name of the Kafka Bridge cluster.

A pod ID identifies each pod created.

With the default deployment, you install a single Kafka Bridge pod.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
Additional resources
6.6.2. Exposing the Kafka Bridge service to your local machine
Use port forwarding to expose the AMQ Streams Kafka Bridge service to your local machine on http://localhost:8080.
Port forwarding is only suitable for development and testing purposes.
Procedure
List the names of the pods in your OpenShift cluster:

oc get pods -o name

pod/kafka-consumer
# ...
pod/my-bridge-bridge-<pod_id>

Connect to the Kafka Bridge pod on port 8080:

oc port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 &

Note: If port 8080 on your local machine is already in use, use an alternative HTTP port, such as 8008.
API requests are now forwarded from port 8080 on your local machine to port 8080 in the Kafka Bridge pod.
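To verify the forwarded connection, you can send a request to the bridge from your local machine; for example, listing topics using a standard Kafka Bridge endpoint (a minimal check, assuming the port forward above is active):

curl -X GET http://localhost:8080/topics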
6.6.3. Accessing the Kafka Bridge outside of OpenShift
After deployment, the AMQ Streams Kafka Bridge can only be accessed by applications running in the same OpenShift cluster. These applications use the <kafka_bridge_name>-bridge-service
service to access the API.
If you want to make the Kafka Bridge accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features:
- LoadBalancer or NodePort type services
- Ingress resources (Kubernetes only)
- OpenShift routes (OpenShift only)
If you decide to create services, use the labels from the
of the <kafka_bridge_name>-bridge-service
service to configure the pods to which the service will route the traffic:
# ...
selector:
strimzi.io/cluster: kafka-bridge-name 1
strimzi.io/kind: KafkaBridge
#...
- 1
- Name of the Kafka Bridge custom resource in your OpenShift cluster.
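For example, on OpenShift you might expose the Kafka Bridge with a route like the following sketch. The route name my-bridge-route is an assumption, and the service name assumes a KafkaBridge resource named my-bridge.

Example OpenShift route for the Kafka Bridge (sketch)

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-bridge-route   # assumed name
spec:
  to:
    kind: Service
    name: my-bridge-bridge-service   # <kafka_bridge_name>-bridge-service
  port:
    targetPort: 8080   # HTTP port of the Kafka Bridge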
6.6.4. List of Kafka Bridge cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
- bridge-cluster-name-bridge
- Deployment responsible for creating the Kafka Bridge worker node pods.
- bridge-cluster-name-bridge-service
- Service which exposes the REST interface of the Kafka Bridge cluster.
- bridge-cluster-name-bridge-config
- ConfigMap which contains the Kafka Bridge ancillary configuration and is mounted as a volume by the Kafka Bridge pods.
- bridge-cluster-name-bridge
- Pod Disruption Budget configured for the Kafka Bridge worker nodes.
6.7. Alternative standalone deployment options for AMQ Streams operators
You can perform a standalone deployment of the Topic Operator and User Operator. Consider a standalone deployment of these operators if you are using a Kafka cluster that is not managed by the Cluster Operator.
You deploy the operators to OpenShift. Kafka can be running outside of OpenShift. For example, you might be using Kafka as a managed service. You adjust the deployment configuration for the standalone operator to match the address of your Kafka cluster.
6.7.1. Deploying the standalone Topic Operator
This procedure shows how to deploy the Topic Operator as a standalone component for topic management. You can use a standalone Topic Operator with a Kafka cluster that is not managed by the Cluster Operator.
A standalone deployment can operate with any Kafka cluster.
Standalone deployment files are provided with AMQ Streams. Use the 05-Deployment-strimzi-topic-operator.yaml
deployment file to deploy the Topic Operator. Add or set the environment variables needed to make a connection to a Kafka cluster.
The Topic Operator watches for KafkaTopic
resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the Topic Operator configuration. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator. If you want to use more than one Topic Operator, configure each of them to watch different namespaces. In this way, you can use Topic Operators with multiple Kafka clusters.
Prerequisites
You are running a Kafka cluster for the Topic Operator to connect to.
As long as the standalone Topic Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service.
Procedure
Edit the env properties in the install/topic-operator/05-Deployment-strimzi-topic-operator.yaml standalone deployment file.

Example standalone Topic Operator deployment configuration

apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-topic-operator
  labels:
    app: strimzi
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: strimzi-topic-operator
          # ...
          env:
            - name: STRIMZI_NAMESPACE 1
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2
              value: my-kafka-bootstrap-address:9092
            - name: STRIMZI_RESOURCE_LABELS 3
              value: "strimzi.io/cluster=my-cluster"
            - name: STRIMZI_ZOOKEEPER_CONNECT 4
              value: my-cluster-zookeeper-client:2181
            - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 5
              value: "18000"
            - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6
              value: "120000"
            - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 7
              value: "6"
            - name: STRIMZI_LOG_LEVEL 8
              value: INFO
            - name: STRIMZI_TLS_ENABLED 9
              value: "false"
            - name: STRIMZI_JAVA_OPTS 10
              value: "-Xmx=512M -Xms=256M"
            - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 11
              value: "-Djavax.net.debug=verbose -DpropertyName=value"
            - name: STRIMZI_PUBLIC_CA 12
              value: "false"
            - name: STRIMZI_TLS_AUTH_ENABLED 13
              value: "false"
            - name: STRIMZI_SASL_ENABLED 14
              value: "false"
            - name: STRIMZI_SASL_USERNAME 15
              value: "admin"
            - name: STRIMZI_SASL_PASSWORD 16
              value: "password"
            - name: STRIMZI_SASL_MECHANISM 17
              value: "scram-sha-512"
            - name: STRIMZI_SECURITY_PROTOCOL 18
              value: "SSL"
- 1
- The OpenShift namespace for the Topic Operator to watch for KafkaTopic resources. Specify the namespace of the Kafka cluster.
- 2
- The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down (see the example following these callouts).
- 3
- The label to identify the KafkaTopic resources managed by the Topic Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaTopic resource. If you deploy more than one Topic Operator, the labels must be unique for each. That is, the operators cannot manage the same resources.
- 4
- (ZooKeeper) The host and port pair of the address to connect to the ZooKeeper cluster. This must be the same ZooKeeper cluster that your Kafka cluster is using.
- 5
- (ZooKeeper) The ZooKeeper session timeout, in milliseconds. The default is 18000 (18 seconds).
- 6
- The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes).
- 7
- The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential backoff. Consider increasing this value when topic creation takes more time due to the number of partitions or replicas. The default is 6 attempts.
- 8
- The level for printing logging messages. You can set the level to ERROR, WARNING, INFO, DEBUG, or TRACE.
- 9
- Enables TLS support for encrypted communication with the Kafka brokers.
- 10
- (Optional) The Java options used by the JVM running the Topic Operator.
- 11
- (Optional) The debugging (-D) options set for the Topic Operator.
- 12
- (Optional) Skips the generation of trust store certificates if TLS is enabled through STRIMZI_TLS_ENABLED. If this environment variable is enabled, the brokers must use a public trusted certificate authority for their TLS certificates. The default is false.
- 13
- (Optional) Generates key store certificates for mTLS authentication. Setting this to false disables client authentication with mTLS to the Kafka brokers. The default is true.
- 14
- (Optional) Enables SASL support for client authentication when connecting to Kafka brokers. The default is false.
- 15
- (Optional) The SASL username for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED.
- 16
- (Optional) The SASL password for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED.
- 17
- (Optional) The SASL mechanism for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED. You can set the value to plain, scram-sha-256, or scram-sha-512.
- 18
- (Optional) The security protocol used for communication with Kafka brokers. The default value is PLAINTEXT. You can set the value to PLAINTEXT, SSL, SASL_PLAINTEXT, or SASL_SSL.
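For example, as noted in callout 2, you can list more than one bootstrap address so that the Topic Operator can still connect if a broker is unavailable. A minimal sketch, assuming hypothetical per-broker addresses:
env:
  - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS
    value: my-kafka-0:9092,my-kafka-1:9092,my-kafka-2:9092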
If you want to connect to Kafka brokers that are using certificates from a public certificate authority, set STRIMZI_PUBLIC_CA to true. Set this property to true, for example, if you are using the Amazon AWS MSK service.
If you enabled mTLS with the STRIMZI_TLS_ENABLED environment variable, specify the keystore and truststore used to authenticate connection to the Kafka cluster.
Example mTLS configuration
# ...
env:
  - name: STRIMZI_TRUSTSTORE_LOCATION 1
    value: "/path/to/truststore.p12"
  - name: STRIMZI_TRUSTSTORE_PASSWORD 2
    value: "TRUSTSTORE-PASSWORD"
  - name: STRIMZI_KEYSTORE_LOCATION 3
    value: "/path/to/keystore.p12"
  - name: STRIMZI_KEYSTORE_PASSWORD 4
    value: "KEYSTORE-PASSWORD"
# ...
Deploy the Topic Operator.
oc create -f install/topic-operator
Check the status of the deployment:
oc get deployments
Output shows the deployment name and readiness
NAME                     READY   UP-TO-DATE   AVAILABLE
strimzi-topic-operator   1/1     1            1
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
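As an optional extra check, you can follow the operator log to confirm that reconciliation starts; this assumes the deployment name strimzi-topic-operator from the example output above:
oc logs deployment/strimzi-topic-operator -f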
6.7.1.1. (Preview) Deploying the standalone Topic Operator for unidirectional topic management
Unidirectional topic management maintains topics solely through KafkaTopic
resources. For more information on unidirectional topic management, see Section 9.1, “Topic management modes”.
If you want to try the preview of unidirectional topic management, follow these steps to deploy the standalone Topic Operator.
Procedure
Undeploy the current standalone Topic Operator.
Retain the KafkaTopic resources, which are picked up by the Topic Operator when it is deployed again.
Edit the Deployment configuration for the standalone Topic Operator to remove any ZooKeeper-related environment variables:
- STRIMZI_ZOOKEEPER_CONNECT
- STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS
- TC_ZK_CONNECTION_TIMEOUT_MS
- STRIMZI_USE_ZOOKEEPER_TOPIC_STORE
It is the presence or absence of the ZooKeeper variables that defines whether the unidirectional Topic Operator is used. Unidirectional topic management does not use ZooKeeper. If ZooKeeper environment variables are not present, the unidirectional Topic Operator is used. Otherwise, the bidirectional Topic Operator is used.
Other unused environment variables that can be removed if present:
- STRIMZI_REASSIGN_THROTTLE
- STRIMZI_REASSIGN_VERIFY_INTERVAL_MS
- STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS
- STRIMZI_TOPICS_PATH
- STRIMZI_STORE_TOPIC
- STRIMZI_STORE_NAME
- STRIMZI_APPLICATION_ID
- STRIMZI_STALE_RESULT_TIMEOUT_MS
(Optional) Set the STRIMZI_USE_FINALIZERS environment variable to false:
Additional configuration for unidirectional topic management
# ...
env:
  - name: STRIMZI_USE_FINALIZERS
    value: "false"
Set this environment variable to false if you do not want to use finalizers to control topic deletion.
Example standalone Topic Operator deployment configuration for unidirectional topic management
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-topic-operator
  labels:
    app: strimzi
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: strimzi-topic-operator
          # ...
          env:
            - name: STRIMZI_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS
              value: my-kafka-bootstrap-address:9092
            - name: STRIMZI_RESOURCE_LABELS
              value: "strimzi.io/cluster=my-cluster"
            - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
              value: "120000"
            - name: STRIMZI_LOG_LEVEL
              value: INFO
            - name: STRIMZI_TLS_ENABLED
              value: "false"
            - name: STRIMZI_JAVA_OPTS
              value: "-Xmx=512M -Xms=256M"
            - name: STRIMZI_JAVA_SYSTEM_PROPERTIES
              value: "-Djavax.net.debug=verbose -DpropertyName=value"
            - name: STRIMZI_PUBLIC_CA
              value: "false"
            - name: STRIMZI_TLS_AUTH_ENABLED
              value: "false"
            - name: STRIMZI_SASL_ENABLED
              value: "false"
            - name: STRIMZI_SASL_USERNAME
              value: "admin"
            - name: STRIMZI_SASL_PASSWORD
              value: "password"
            - name: STRIMZI_SASL_MECHANISM
              value: "scram-sha-512"
            - name: STRIMZI_SECURITY_PROTOCOL
              value: "SSL"
            - name: STRIMZI_USE_FINALIZERS
              value: "true"
- Deploy the standalone Topic Operator in the standard way.
6.7.2. Deploying the standalone User Operator
This procedure shows how to deploy the User Operator as a standalone component for user management. You can use a standalone User Operator with a Kafka cluster that is not managed by the Cluster Operator.
A standalone deployment can operate with any Kafka cluster.
Standalone deployment files are provided with AMQ Streams. Use the 05-Deployment-strimzi-user-operator.yaml
deployment file to deploy the User Operator. Add or set the environment variables needed to make a connection to a Kafka cluster.
The User Operator watches for KafkaUser
resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the User Operator configuration. A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator. If you want to use more than one User Operator, configure each of them to watch different namespaces. In this way, you can use the User Operator with multiple Kafka clusters.
Prerequisites
You are running a Kafka cluster for the User Operator to connect to.
As long as the standalone User Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service.
Procedure
Edit the following env properties in the install/user-operator/05-Deployment-strimzi-user-operator.yaml standalone deployment file.
Example standalone User Operator deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-user-operator
  labels:
    app: strimzi
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: strimzi-user-operator
          # ...
          env:
            - name: STRIMZI_NAMESPACE 1
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2
              value: my-kafka-bootstrap-address:9092
            - name: STRIMZI_CA_CERT_NAME 3
              value: my-cluster-clients-ca-cert
            - name: STRIMZI_CA_KEY_NAME 4
              value: my-cluster-clients-ca
            - name: STRIMZI_LABELS 5
              value: "strimzi.io/cluster=my-cluster"
            - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6
              value: "120000"
            - name: STRIMZI_WORK_QUEUE_SIZE 7
              value: 10000
            - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE 8
              value: 10
            - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE 9
              value: 4
            - name: STRIMZI_LOG_LEVEL 10
              value: INFO
            - name: STRIMZI_GC_LOG_ENABLED 11
              value: "true"
            - name: STRIMZI_CA_VALIDITY 12
              value: "365"
            - name: STRIMZI_CA_RENEWAL 13
              value: "30"
            - name: STRIMZI_JAVA_OPTS 14
              value: "-Xmx=512M -Xms=256M"
            - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 15
              value: "-Djavax.net.debug=verbose -DpropertyName=value"
            - name: STRIMZI_SECRET_PREFIX 16
              value: "kafka-"
            - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 17
              value: "true"
            - name: STRIMZI_MAINTENANCE_TIME_WINDOWS 18
              value: '* * 8-10 * * ?;* * 14-15 * * ?'
            - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION 19
              value: |
                default.api.timeout.ms=120000
                request.timeout.ms=60000
- 1
- The OpenShift namespace for the User Operator to watch for KafkaUser resources. Only one namespace can be specified.
- 2
- The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down.
- 3
- The OpenShift Secret that contains the public key (ca.crt) value of the CA (certificate authority) that signs new user certificates for mTLS authentication.
- 4
- The OpenShift Secret that contains the private key (ca.key) value of the CA that signs new user certificates for mTLS authentication.
- 5
- The label to identify the KafkaUser resources managed by the User Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaUser resource. If you deploy more than one User Operator, the labels must be unique for each. That is, the operators cannot manage the same resources.
- 6
- The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes).
- 7
- The size of the controller event queue. The size of the queue should be at least as big as the maximum number of users you expect the User Operator to manage. The default is 1024.
- 8
- The size of the worker pool for reconciling users. A bigger pool might require more resources, but it will also handle more KafkaUser resources. The default is 50.
- 9
- The size of the worker pool for Kafka Admin API and OpenShift operations. A bigger pool might require more resources, but it will also handle more KafkaUser resources. The default is 4.
- 10
- The level for printing logging messages. You can set the level to ERROR, WARNING, INFO, DEBUG, or TRACE.
- 11
- Enables garbage collection (GC) logging. The default is true.
- 12
- The validity period for the CA. The default is 365 days.
- 13
- The renewal period for the CA. The renewal period is measured backwards from the expiry date of the current certificate. The default is 30 days, which initiates certificate renewal before the old certificates expire.
- 14
- (Optional) The Java options used by the JVM running the User Operator.
- 15
- (Optional) The debugging (-D) options set for the User Operator.
- 16
- (Optional) Prefix for the names of OpenShift secrets created by the User Operator.
- 17
- (Optional) Indicates whether the Kafka cluster supports management of authorization ACL rules using the Kafka Admin API. When set to false, the User Operator rejects all resources with simple authorization ACL rules. This helps to avoid unnecessary exceptions in the Kafka cluster logs. The default is true.
- 18
- (Optional) Semicolon-separated list of cron expressions defining the maintenance time windows during which expiring user certificates are renewed.
- 19
- (Optional) Configuration options for the Kafka Admin client used by the User Operator, in properties format.
If you are using mTLS to connect to the Kafka cluster, specify the secrets used to authenticate the connection. Otherwise, go to the next step.
Example mTLS configuration
# ...
env:
  - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1
    value: my-cluster-cluster-ca-cert
  - name: STRIMZI_EO_KEY_SECRET_NAME 2
    value: my-cluster-entity-operator-certs
# ...
- 1
- The OpenShift Secret that contains the public key (ca.crt) value of the CA that signs Kafka broker certificates.
- 2
- The OpenShift Secret that contains the certificate public key (entity-operator.crt) and private key (entity-operator.key) that is used for mTLS authentication against the Kafka cluster.
Deploy the User Operator.
oc create -f install/user-operator
Check the status of the deployment:
oc get deployments
Output shows the deployment name and readiness
NAME                    READY   UP-TO-DATE   AVAILABLE
strimzi-user-operator   1/1     1            1
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
Chapter 7. Enabling AMQ Streams feature gates
AMQ Streams operators use feature gates to enable or disable specific features and functions. By enabling a feature gate, you alter the behavior of the corresponding operator, thereby introducing the feature to your AMQ Streams deployment.
A feature gate might be enabled or disabled by default, depending on its level of maturity.
To modify a feature gate’s default state, use the STRIMZI_FEATURE_GATES
environment variable in the operator’s configuration. You can modify multiple feature gates using this single environment variable. Specify a comma-separated list of feature gate names and prefixes. A +
prefix enables the feature gate and a -
prefix disables it.
Example feature gate configuration that enables FeatureGate1
and disables FeatureGate2
env:
  - name: STRIMZI_FEATURE_GATES
    value: +FeatureGate1,-FeatureGate2
7.1. ControlPlaneListener feature gate
The ControlPlaneListener
feature gate has moved to GA, which means it is now permanently enabled and cannot be disabled. With ControlPlaneListener
enabled, the connections between the Kafka controller and brokers use an internal control plane listener on port 9090. Replication of data between brokers, as well as internal connections from AMQ Streams operators, Cruise Control, and the Kafka Exporter, uses the replication listener on port 9091.
With the ControlPlaneListener
feature gate permanently enabled, it is no longer possible to upgrade or downgrade directly between AMQ Streams 1.7 and earlier and AMQ Streams 2.3 and newer. You have to first upgrade or downgrade through one of the AMQ Streams versions in-between, disable the ControlPlaneListener
feature gate, and then downgrade or upgrade (with the feature gate enabled) to the target version.
7.2. ServiceAccountPatching feature gate
The ServiceAccountPatching
feature gate has moved to GA, which means it is now permanently enabled and cannot be disabled. With ServiceAccountPatching
enabled, the Cluster Operator always reconciles service accounts and updates them when needed. For example, when you change service account labels or annotations using the template
property of a custom resource, the operator automatically updates them on the existing service account resources.
7.3. UseStrimziPodSets feature gate
The UseStrimziPodSets
feature gate has moved to GA, which means it is now permanently enabled and cannot be disabled. Support for StatefulSets has been removed, and AMQ Streams now always uses StrimziPodSets to manage Kafka and ZooKeeper pods.
With the UseStrimziPodSets
feature gate permanently enabled, it is no longer possible to downgrade directly from AMQ Streams 2.4 and newer to AMQ Streams 2.0 or earlier. You have to first downgrade through one of the AMQ Streams versions in-between, disable the UseStrimziPodSets
feature gate, and then downgrade to AMQ Streams 2.0 or earlier.
7.4. (Preview) UseKRaft feature gate
The UseKRaft
feature gate has a default state of disabled.
The UseKRaft
feature gate deploys the Kafka cluster in the KRaft (Kafka Raft metadata) mode without ZooKeeper. ZooKeeper and KRaft are mechanisms used to manage metadata and coordinate operations in Kafka clusters. KRaft mode eliminates the need for an external coordination service like ZooKeeper. In KRaft mode, Kafka nodes take on the roles of brokers, controllers, or both. They collectively manage the metadata, which is replicated across partitions. Controllers are responsible for coordinating operations and maintaining the cluster’s state.
This feature gate is currently intended only for development and testing.
KRaft mode is not ready for production in Apache Kafka or in AMQ Streams.
Enabling the UseKRaft
feature gate requires the KafkaNodePools
feature gate to be enabled as well. To deploy a Kafka cluster in KRaft mode, you must use the KafkaNodePool
resources. For more details and examples, see Section 6.3.2, “(Preview) Deploying Kafka node pools”.
When the UseKRaft
feature gate is enabled, the Kafka cluster is deployed without ZooKeeper. The .spec.zookeeper
properties in the Kafka
custom resource are ignored, but still need to be present. The UseKRaft
feature gate provides an API that configures Kafka cluster nodes and their roles. The API is still in development and is expected to change before the KRaft mode is production-ready.
Currently, the KRaft mode in AMQ Streams has the following major limitations:
- Moving from Kafka clusters with ZooKeeper to KRaft clusters or the other way around is not supported.
- Controller-only nodes cannot undergo rolling updates or be updated individually.
- Upgrades and downgrades of Apache Kafka versions or the AMQ Streams operator are not supported. Users might need to delete the cluster, upgrade the operator and deploy a new Kafka cluster.
- Only the Unidirectional Topic Operator is supported in KRaft mode. You can enable it using the UnidirectionalTopicOperator feature gate. The Bidirectional Topic Operator is not supported; when the UnidirectionalTopicOperator feature gate is not enabled, the spec.entityOperator.topicOperator property must be removed from the Kafka custom resource.
- JBOD storage is not supported. The type: jbod storage can be used, but the JBOD array can contain only one disk.
Enabling the UseKRaft feature gate
To enable the UseKRaft
feature gate, specify +UseKRaft,+KafkaNodePools
in the STRIMZI_FEATURE_GATES
environment variable in the Cluster Operator configuration.
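For example, using the format shown at the start of this chapter, the corresponding fragment of the Cluster Operator configuration might look like the following sketch. Keep any other feature gates you have set in the same comma-separated list:
env:
  - name: STRIMZI_FEATURE_GATES
    value: +UseKRaft,+KafkaNodePools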
7.5. StableConnectIdentities feature gate
The StableConnectIdentities
feature gate has a default state of disabled.
The StableConnectIdentities
feature gate uses StrimziPodSet
resources to manage Kafka Connect and Kafka MirrorMaker 2 pods instead of using OpenShift Deployment
resources. StrimziPodSets
give the pods stable names and stable addresses, which do not change during rolling upgrades. This helps to minimize the number of rebalances of connector tasks.
Enabling the StableConnectIdentities feature gate
To enable the StableConnectIdentities
feature gate, specify +StableConnectIdentities
in the STRIMZI_FEATURE_GATES
environment variable in the Cluster Operator configuration.
The StableConnectIdentities
feature gate must be disabled when downgrading to AMQ Streams 2.3 and earlier versions.
7.6. (Preview) KafkaNodePools feature gate
The KafkaNodePools
feature gate has a default state of disabled.
The KafkaNodePools
feature gate introduces a new KafkaNodePool
custom resource that enables the configuration of different pools of Apache Kafka nodes.
A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. Each pool has its own unique configuration, which includes mandatory settings such as the number of replicas, storage configuration, and a list of assigned roles. You can assign the controller role, broker role, or both roles to all nodes in the pool in the .spec.roles
field. When used with a ZooKeeper-based Apache Kafka cluster, it must be set to the broker
role. When used with the UseKRaft
feature gate, it can be set to broker
, controller
, or both.
In addition, a node pool can have its own configuration of resource requests and limits, Java JVM options, and resource templates. Configuration options not set in the KafkaNodePool
resource are inherited from the Kafka
custom resource.
The KafkaNodePool
resources use a strimzi.io/cluster
label to indicate to which Kafka cluster they belong. The label must be set to the name of the Kafka
custom resource.
Examples of the KafkaNodePool
resources can be found in the example configuration files provided by AMQ Streams.
Enabling the KafkaNodePools feature gate
To enable the KafkaNodePools
feature gate, specify +KafkaNodePools
in the STRIMZI_FEATURE_GATES
environment variable in the Cluster Operator configuration. The Kafka
custom resource using the node pools must also have the annotation strimzi.io/node-pools: enabled
.
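As a sketch, the annotation is added to the Kafka custom resource as follows (the cluster name my-cluster is assumed from other examples in this guide; the +KafkaNodePools entry in STRIMZI_FEATURE_GATES is still required):
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled
spec:
  # ...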
7.7. (Preview) UnidirectionalTopicOperator feature gate
The UnidirectionalTopicOperator
feature gate has a default state of disabled.
The UnidirectionalTopicOperator
feature gate introduces a unidirectional topic management mode for creating Kafka topics using the KafkaTopic
resource. Unidirectional mode is compatible with using KRaft for cluster management. With unidirectional mode, you create Kafka topics using the KafkaTopic
resource, which are then managed by the Topic Operator. Any configuration changes to a topic outside the KafkaTopic
resource are reverted. For more information on topic management, see Section 9.1, “Topic management modes”.
Enabling the UnidirectionalTopicOperator feature gate
To enable the UnidirectionalTopicOperator
feature gate, specify +UnidirectionalTopicOperator
in the STRIMZI_FEATURE_GATES
environment variable in the Cluster Operator configuration. For the KafkaTopic
custom resource to use this feature, the strimzi.io/managed
annotation is set to true
by default.
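To illustrate, a minimal KafkaTopic sketch with the annotation made explicit; the topic and cluster names are placeholders, and the annotation value shown is the assumed default:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
  annotations:
    strimzi.io/managed: "true"
spec:
  partitions: 3
  replicas: 3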
7.8. Feature gate releases
Feature gates have three stages of maturity:
- Alpha — typically disabled by default
- Beta — typically enabled by default
- General Availability (GA) — typically always enabled
Alpha stage features might be experimental or unstable, subject to change, or not sufficiently tested for production use. Beta stage features are well tested and their functionality is not likely to change. GA stage features are stable and should not change in the future. Alpha and beta stage features are removed if they do not prove to be useful.
- The ControlPlaneListener feature gate moved to GA stage in AMQ Streams 2.3. It is now permanently enabled and cannot be disabled.
- The ServiceAccountPatching feature gate moved to GA stage in AMQ Streams 2.3. It is now permanently enabled and cannot be disabled.
- The UseStrimziPodSets feature gate moved to GA stage in AMQ Streams 2.5 and support for StatefulSets is completely removed. It is now permanently enabled and cannot be disabled.
- The UseKRaft feature gate is available for development only and does not currently have a planned release for moving to the beta phase.
- The StableConnectIdentities feature gate is in alpha stage and is disabled by default.
- The KafkaNodePools feature gate is in alpha stage and is disabled by default.
- The UnidirectionalTopicOperator feature gate is in alpha stage and is disabled by default.
Feature gates might be removed when they reach GA. This means that the feature was incorporated into the AMQ Streams core features and can no longer be disabled.
Table 7.1. Feature gates and the AMQ Streams versions when they moved to alpha, beta, or GA
Feature gate | Alpha | Beta | GA |
---|---|---|---|
ControlPlaneListener | 1.8 | 2.0 | 2.3 |
ServiceAccountPatching | 1.8 | 2.0 | 2.3 |
UseStrimziPodSets | 2.1 | 2.3 | 2.5 |
UseKRaft | 2.2 | - | - |
StableConnectIdentities | 2.4 | - | - |
KafkaNodePools | 2.5 | - | - |
UnidirectionalTopicOperator | 2.5 | - | - |
If a feature gate is enabled, you may need to disable it before upgrading or downgrading from a specific AMQ Streams version. The following table shows which feature gates you need to disable when upgrading or downgrading AMQ Streams versions.
Table 7.2. Feature gates to disable when upgrading or downgrading AMQ Streams
Feature gate to disable | Upgrading from AMQ Streams version | Downgrading to AMQ Streams version |
---|---|---|
ControlPlaneListener | 1.7 and earlier | 1.7 and earlier |
UseStrimziPodSets | - | 2.0 and earlier |
StableConnectIdentities | - | 2.3 and earlier |
Chapter 8. Configuring a deployment
Configure and manage an AMQ Streams deployment to your precise needs using AMQ Streams custom resources. AMQ Streams provides example custom resources with each release, allowing you to configure and create instances of supported Kafka components. Fine-tune your deployment by configuring custom resources to include additional features according to your specific requirements. For specific areas of configuration, namely metrics, logging, and external configuration for Kafka Connect connectors, you can also use ConfigMap
resources. By using a ConfigMap
resource to incorporate configuration, you centralize maintenance. You can also use configuration providers to load configuration from external sources, which we recommend for supplying the credentials for Kafka Connect connector configuration.
Use custom resources to configure and create instances of the following components:
- Kafka clusters
- Kafka Connect clusters
- Kafka MirrorMaker
- Kafka Bridge
- Cruise Control
You can also use custom resource configuration to manage your instances or modify your deployment to introduce additional features. This might include configuration that supports the following:
- (Preview) Specifying node pools
- Securing client access to Kafka brokers
- Accessing Kafka brokers from outside the cluster
- Creating topics
- Creating users (clients)
- Controlling feature gates
- Changing logging frequency
- Allocating resource limits and requests
- Introducing features, such as AMQ Streams Drain Cleaner, Cruise Control, or distributed tracing.
The AMQ Streams Custom Resource API Reference describes the properties you can use in your configuration.
Labels applied to a custom resource are also applied to the OpenShift resources making up its cluster. This provides a convenient mechanism for resources to be labeled as required.
Applying changes to a custom resource configuration file
You add configuration to a custom resource using spec
properties. After adding the configuration, you can use oc
to apply the changes to a custom resource configuration file:
oc apply -f <kafka_configuration_file>
8.1. Using example configuration files
Further enhance your deployment by incorporating additional supported configuration. Example configuration files are provided with the downloadable release artifacts from the AMQ Streams software downloads page.
The example files include only the essential properties and values for custom resources by default. You can download and apply the examples using the oc
command-line tool. The examples can serve as a starting point when building your own Kafka component configuration for deployment.
If you installed AMQ Streams using the Operator, you can still download the example files and use them to upload configuration.
The release artifacts include an examples
directory that contains the configuration examples.
Example configuration files provided with AMQ Streams
examples
├── user 1
├── topic 2
├── security 3
│   ├── tls-auth
│   ├── scram-sha-512-auth
│   └── keycloak-authorization
├── mirror-maker 4
├── metrics 5
├── kafka 6
│   └── nodepools 7
├── cruise-control 8
├── connect 9
└── bridge 10
- 1
- KafkaUser custom resource configuration, which is managed by the User Operator.
- 2
- KafkaTopic custom resource configuration, which is managed by the Topic Operator.
- 3
- Authentication and authorization configuration for Kafka components. Includes example configuration for TLS and SCRAM-SHA-512 authentication. The Red Hat Single Sign-On example includes Kafka custom resource configuration and a Red Hat Single Sign-On realm specification. You can use the example to try Red Hat Single Sign-On authorization services. There is also an example with oauth authentication and keycloak authorization metrics enabled.
- 4
- Kafka custom resource configuration for a deployment of MirrorMaker. Includes example configuration for replication policy and synchronization frequency.
- 5
- Metrics configuration, including Prometheus installation and Grafana dashboard files.
- 6
- Kafka custom resource configuration for a deployment of Kafka. Includes example configuration for an ephemeral or persistent single or multi-node deployment.
- 7
- (Preview) KafkaNodePool configuration for Kafka nodes in a Kafka cluster. Includes example configuration for nodes in clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper.
- 8
- Kafka custom resource with a deployment configuration for Cruise Control. Includes KafkaRebalance custom resources to generate optimization proposals from Cruise Control, with example configurations to use the default or user optimization goals.
- 9
- KafkaConnect and KafkaConnector custom resource configuration for a deployment of Kafka Connect. Includes example configurations for a single or multi-node deployment.
- 10
- KafkaBridge custom resource configuration for a deployment of Kafka Bridge.
8.2. Configuring Kafka
Update the spec
properties of the Kafka
custom resource to configure your Kafka deployment.
As well as configuring Kafka, you can add configuration for ZooKeeper and the AMQ Streams Operators. Common configuration properties, such as logging and healthchecks, are configured independently for each component.
Configuration options that are particularly important include the following:
- Resource requests (CPU / Memory)
- JVM options for maximum and minimum memory allocation
- Listeners for connecting clients to Kafka brokers (and authentication of clients)
- Authentication
- Storage
- Rack awareness
- Metrics
- Cruise Control for cluster rebalancing
For a deeper understanding of the Kafka cluster configuration options, refer to the AMQ Streams Custom Resource API Reference.
Kafka versions
The inter.broker.protocol.version
property for the Kafka config
must be the version supported by the specified Kafka version (spec.kafka.version
). The property represents the version of Kafka protocol used in a Kafka cluster.
From Kafka 3.0.0, when the inter.broker.protocol.version
is set to 3.0
or higher, the log.message.format.version
option is ignored and doesn’t need to be set.
An update to the inter.broker.protocol.version
is required when upgrading your Kafka version. For more information, see Upgrading Kafka.
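For example, the two version properties appear together in the Kafka resource as follows; this fragment is consistent with the full example later in this section:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.5.0
    config:
      inter.broker.protocol.version: "3.5"
    # ...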
Managing TLS certificates
When deploying Kafka, the Cluster Operator automatically sets up and renews TLS certificates to enable encryption and authentication within your cluster. If required, you can manually renew the cluster and clients CA certificates before their renewal period starts. You can also replace the keys used by the cluster and clients CA certificates. For more information, see Renewing CA certificates manually and Replacing private keys.
Example Kafka
custom resource configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3 1
    version: 3.5.0 2
    logging: 3
      type: inline
      loggers:
        kafka.root.logger.level: INFO
    resources: 4
      requests:
        memory: 64Gi
        cpu: "8"
      limits:
        memory: 64Gi
        cpu: "12"
    readinessProbe: 5
      initialDelaySeconds: 15
      timeoutSeconds: 5
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    jvmOptions: 6
      -Xms: 8192m
      -Xmx: 8192m
    image: my-org/my-image:latest 7
    listeners: 8
      - name: plain 9
        port: 9092 10
        type: internal 11
        tls: false 12
        configuration:
          useServiceDnsDomain: true 13
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication: 14
          type: tls
      - name: external 15
        port: 9094
        type: route
        tls: true
        configuration:
          brokerCertChainAndKey: 16
            secretName: my-secret
            certificate: my-certificate.crt
            key: my-key.key
    authorization: 17
      type: simple
    config: 18
      auto.create.topics.enable: "false"
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.5"
    storage: 19
      type: persistent-claim 20
      size: 10000Gi
    rack: 21
      topologyKey: topology.kubernetes.io/zone
    metricsConfig: 22
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef: 23
          name: my-config-map
          key: my-key
    # ...
  zookeeper: 24
    replicas: 3 25
    logging: 26
      type: inline
      loggers:
        zookeeper.root.logger: INFO
    resources:
      requests:
        memory: 8Gi
        cpu: "2"
      limits:
        memory: 8Gi
        cpu: "2"
    jvmOptions:
      -Xms: 4096m
      -Xmx: 4096m
    storage:
      type: persistent-claim
      size: 1000Gi
    metricsConfig:
      # ...
  entityOperator: 27
    tlsSidecar: 28
      resources:
        requests:
          cpu: 200m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 128Mi
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      logging: 29
        type: inline
        loggers:
          rootLogger.level: INFO
      resources:
        requests:
          memory: 512Mi
          cpu: "1"
        limits:
          memory: 512Mi
          cpu: "1"
    userOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      logging: 30
        type: inline
        loggers:
          rootLogger.level: INFO
      resources:
        requests:
          memory: 512Mi
          cpu: "1"
        limits:
          memory: 512Mi
          cpu: "1"
  kafkaExporter: 31
    # ...
  cruiseControl: 32
    # ...
- 1
- The number of replica nodes.
- 2
- Kafka version, which can be changed to a supported version by following the upgrade procedure.
- 3
- Kafka loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties key in the ConfigMap. For the Kafka kafka.root.logger.level logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF.
- 4
- Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.
- 5
- Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
- 6
- JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka.
- 7
- ADVANCED OPTION: Container image configuration, which is recommended only in special situations.
- 8
- Listeners configure how clients connect to the Kafka cluster via bootstrap addresses. Listeners are configured as internal or external listeners for connection from inside or outside the OpenShift cluster.
- 9
- Name to identify the listener. Must be unique within the Kafka cluster.
- 10
- Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher, with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients.
- 11
- Listener type specified as internal or cluster-ip (to expose Kafka using per-broker ClusterIP services), or for external listeners, as route (OpenShift only), loadbalancer, nodeport or ingress (Kubernetes only).
- 12
- Enables TLS encryption for each listener. Default is false. TLS encryption has to be enabled, by setting it to true, for route and ingress type listeners.
- 13
- Defines whether the fully-qualified DNS names including the cluster service suffix (usually .cluster.local) are assigned.
- 14
- Listener authentication mechanism specified as mTLS, SCRAM-SHA-512, or token-based OAuth 2.0.
- 15
- External listener configuration specifies how the Kafka cluster is exposed outside OpenShift, such as through a route, loadbalancer or nodeport.
- 16
- Optional configuration for a Kafka listener certificate managed by an external CA (certificate authority). The brokerCertChainAndKey specifies a Secret that contains a server certificate and a private key. You can configure Kafka listener certificates on any listener with TLS encryption enabled.
- 17
- Authorization enables simple, OAuth 2.0, or OPA authorization on the Kafka broker. Simple authorization uses the AclAuthorizer Kafka plugin.
- 18
- Broker configuration. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams.
- 19
- Storage size for persistent volumes may be increased and additional volumes may be added to JBOD storage.
- 20
- Persistent storage has additional configuration options, such as a storage id and class for dynamic volume provisioning.
- 21
- Rack awareness configuration to spread replicas across different racks, data centers, or availability zones. The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label.
- 22
- Prometheus metrics enabled. In this example, metrics are configured for the Prometheus JMX Exporter (the default metrics exporter).
- 23
- Rules for exporting metrics in Prometheus format to a Grafana dashboard through the Prometheus JMX Exporter, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key.
- 24
- ZooKeeper-specific configuration, which contains properties similar to the Kafka configuration.
- 25
- The number of ZooKeeper nodes. ZooKeeper clusters or ensembles usually run with an odd number of nodes, typically three, five, or seven. The majority of nodes must be available in order to maintain an effective quorum. If the ZooKeeper cluster loses its quorum, it will stop responding to clients and the Kafka brokers will stop working. Having a stable and highly available ZooKeeper cluster is crucial for AMQ Streams.
- 26
- ZooKeeper loggers and log levels.
- 27
- Entity Operator configuration, which specifies the configuration for the Topic Operator and User Operator.
- 28
- Entity Operator TLS sidecar configuration. The Entity Operator uses the TLS sidecar for secure communication with ZooKeeper.
- 29
- Specified Topic Operator loggers and log levels. This example uses inline logging.
- 30
- Specified User Operator loggers and log levels.
- 31
- Kafka Exporter configuration. Kafka Exporter is an optional component for extracting metrics data from Kafka brokers, in particular consumer lag data. For Kafka Exporter to be able to work properly, consumer groups need to be in use.
- 32
- Optional configuration for Cruise Control, which is used to rebalance the Kafka cluster.
8.2.1. Setting limits on brokers using the Kafka Static Quota plugin
Use the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. You enable the plugin and set limits by configuring the Kafka
resource. You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers.
You can set byte-rate thresholds for producer and consumer bandwidth. The total limit is distributed across all clients accessing the broker. For example, you can set a byte-rate threshold of 40 MBps for producers. If two producers are running, they are each limited to a throughput of 20 MBps.
Storage quotas throttle Kafka disk storage between a soft limit and a hard limit. The limits apply to all available disk space. Producers are slowed gradually between the soft and hard limit. The limits prevent disks from filling up too quickly and exceeding their capacity. Full disks can lead to issues that are hard to rectify. The hard limit is the maximum storage limit.
For JBOD storage, the limit applies across all disks. If a broker is using two 1 TB disks and the quota is 1.1 TB, one disk might fill and the other disk will be almost empty.
Prerequisites
- The Cluster Operator that manages the Kafka cluster is running.
Procedure
Add the plugin properties to the config of the Kafka resource.
The plugin properties are shown in this example configuration.
Example Kafka Static Quota plugin configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    config:
      client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback 1
      client.quota.callback.static.produce: 1000000 2
      client.quota.callback.static.fetch: 1000000 3
      client.quota.callback.static.storage.soft: 400000000000 4
      client.quota.callback.static.storage.hard: 500000000000 5
      client.quota.callback.static.storage.check-interval: 5 6
- 1
- Loads the Kafka Static Quota plugin.
- 2
- Sets the producer byte-rate threshold. 1 MBps in this example.
- 3
- Sets the consumer byte-rate threshold. 1 MBps in this example.
- 4
- Sets the lower soft limit for storage. 400 GB in this example.
- 5
- Sets the higher hard limit for storage. 500 GB in this example.
- 6
- Sets the interval in seconds between checks on storage. 5 seconds in this example. You can set this to 0 to disable the check.
Update the resource.
oc apply -f <kafka_configuration_file>
Additional resources
8.2.2. Default ZooKeeper configuration values
When deploying ZooKeeper with AMQ Streams, some of the default configuration set by AMQ Streams differs from the standard ZooKeeper defaults. This is because AMQ Streams sets a number of ZooKeeper properties with values that are optimized for running ZooKeeper within an OpenShift environment.
The default configuration for key ZooKeeper properties in AMQ Streams is as follows:
Table 8.1. Default ZooKeeper Properties in AMQ Streams
Property | Default value | Description |
---|---|---|
tickTime | 2000 | The length of a single tick in milliseconds, which determines the length of a session timeout. |
initLimit | 5 | The maximum number of ticks that a follower is allowed to fall behind the leader in a ZooKeeper cluster. |
syncLimit | 2 | The maximum number of ticks that a follower is allowed to be out of sync with the leader in a ZooKeeper cluster. |
autopurge.purgeInterval | 1 | Enables the autopurge feature and sets the time interval in hours for purging the server-side ZooKeeper transaction log. |
admin.enableServer | false | Flag to disable the ZooKeeper admin server. The admin server is not used by AMQ Streams. |
Modifying these default values as zookeeper.config
in the Kafka
custom resource may impact the behavior and performance of your ZooKeeper cluster.
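If you do need to change a ZooKeeper property, you set it under spec.zookeeper.config. A minimal sketch; the property and value shown are illustrative only:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  zookeeper:
    # ...
    config:
      autopurge.snapRetainCount: 3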
8.3. (Preview) Configuring node pools
Update the spec
properties of the KafkaNodePool
custom resource to configure a node pool deployment.
The node pools feature is available as a preview. Node pools are not enabled by default, so you must enable the KafkaNodePools
feature gate before using them.
A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. Each pool has its own unique configuration, which includes mandatory settings for the number of replicas, roles, and storage allocation.
Optionally, you can also specify values for the following properties:
-
resources
to specify memory and cpu requests and limits -
template
to specify custom configuration for pods and other OpenShift resources -
jvmOptions
to specify custom JVM configuration for heap size, runtime and other options
The Kafka
resource represents the configuration for all nodes in the Kafka cluster. The KafkaNodePool
resource represents the configuration for nodes only in the node pool. If a configuration property is not specified in KafkaNodePool
, it is inherited from the Kafka
resource. Configuration specified in the KafkaNodePool
resource takes precedence if set in both resources. For example, if both the node pool and Kafka configuration includes jvmOptions
, the values specified in the node pool configuration are used. When -Xmx: 1024m
is set in KafkaNodePool.spec.jvmOptions
and -Xms: 512m
is set in Kafka.spec.kafka.jvmOptions
, the node uses the value from its node pool configuration.
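To illustrate the precedence, consider the following sketch of both resources, with mandatory properties elided as # .... Because the node pool defines its own jvmOptions, only -Xmx: 1024m applies to its nodes; the -Xms setting from the Kafka resource is not merged in.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    jvmOptions:
      -Xms: 512m
    # ...
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  jvmOptions:
    -Xmx: 1024m
  # ...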
Properties from Kafka
and KafkaNodePool
schemas are not combined. To clarify, if KafkaNodePool.spec.template
includes only podSet.metadata.labels
, and Kafka.spec.kafka.template
includes podSet.metadata.annotations
and pod.metadata.labels
, the template values from the Kafka configuration are ignored since there is a template value in the node pool configuration.
Node pools can be used with Kafka clusters that operate in KRaft mode (using Kafka Raft metadata) or use ZooKeeper for cluster management. If you are using KRaft mode, you can specify roles for all nodes in the node pool to operate as brokers, controllers, or both. If you are using ZooKeeper, nodes must be set as brokers only.
KRaft mode is not ready for production in Apache Kafka or in AMQ Streams.
For a deeper understanding of the node pool configuration options, refer to the AMQ Streams Custom Resource API Reference.
While the KafkaNodePools
feature gate that enables node pools is in alpha phase, replica and storage configuration properties in the KafkaNodePool
resource must also be present in the Kafka
resource. The configuration in the Kafka
resource is ignored when node pools are used. Similarly, ZooKeeper configuration properties must also be present in the Kafka
resource when using KRaft mode. These properties are also ignored.
Example configuration for a node pool in a cluster using ZooKeeper
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a 1
  labels:
    strimzi.io/cluster: my-cluster 2
spec:
  replicas: 3 3
  roles:
    - broker 4
  storage: 5
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  resources: 6
    requests:
      memory: 64Gi
      cpu: "8"
    limits:
      memory: 64Gi
      cpu: "12"
- 1
- Unique name for the node pool.
- 2
- The Kafka cluster the node pool belongs to. A node pool can only belong to a single cluster.
- 3
- Number of replicas for the nodes.
- 4
- Roles for the nodes in the node pool, which can only be broker when using Kafka with ZooKeeper.
- Storage specification for the nodes.
- 6
- Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.
Example configuration for a node pool in a cluster using KRaft mode
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
name: kraft-dual-role
labels:
strimzi.io/cluster: my-cluster
spec:
replicas: 3
roles: 1
- controller
- broker
storage:
type: jbod
volumes:
- id: 0
type: persistent-claim
size: 20Gi
deleteClaim: false
resources:
requests:
memory: 64Gi
cpu: "8"
limits:
memory: 64Gi
cpu: "12"
- 1
- Roles for the nodes in the node pool. In this example, the nodes have dual roles as controllers and brokers.
The configuration for the Kafka
resource must be suitable for KRaft mode. Currently, KRaft mode has a number of limitations.
8.3.1. (Preview) Assigning IDs to node pools for scaling operations
This procedure describes how to use annotations for advanced node ID handling by the Cluster Operator when performing scaling operations on node pools. You specify the node IDs to use, rather than the Cluster Operator using the next ID in sequence. Management of node IDs in this way gives greater control.
To add a range of IDs, you assign the following annotations to the KafkaNodePool
resource:
- strimzi.io/next-node-ids to add a range of IDs that are used for new brokers
- strimzi.io/remove-node-ids to add a range of IDs for removing existing brokers
You can specify an array of individual node IDs, ID ranges, or a combination of both. For example, you can specify the following range of IDs: [0, 1, 2, 10-20, 30]
for scaling up the Kafka node pool. This format allows you to specify a combination of individual node IDs (0
, 1
, 2
, 30
) as well as a range of IDs (10-20
).
In a typical scenario, you might specify a range of IDs for scaling up and a single node ID to remove a specific node when scaling down.
In this procedure, we add the scaling annotations to node pools as follows:
- pool-a is assigned a range of IDs for scaling up
- pool-b is assigned a range of IDs for scaling down
During the scaling operation, IDs are used as follows:
- Scale up picks up the lowest available ID in the range for the new node.
- Scale down removes the node with the highest available ID in the range.
If there are gaps in the sequence of node IDs assigned in the node pool, the next node to be added is assigned an ID that fills the gap.
The annotations don’t need to be updated after every scaling operation. Any unused IDs are still valid for the next scaling event.
The Cluster Operator allows you to specify a range of IDs in either ascending or descending order, so you can define them in the order the nodes are scaled. For example, when scaling up, you can specify a range such as [1000-1999]
, and the new nodes are assigned the next lowest IDs: 1000
, 1001
, 1002
, 1003
, and so on. Conversely, when scaling down, you can specify a range like [1999-1000]
, ensuring that nodes with the next highest IDs are removed: 1003
, 1002
, 1001
, 1000
, and so on.
If you don’t specify an ID range using the annotations, the Cluster Operator follows its default behavior for handling IDs during scaling operations. Node IDs start at 0 (zero) and run sequentially across the Kafka cluster. The next lowest ID is assigned to a new node. Gaps in node IDs are filled across the cluster. This means that they might not run sequentially within a node pool. The default behavior for scaling up is to add the next lowest available node ID across the cluster; and for scaling down, it is to remove the node in the node pool with the highest available node ID. The default approach is also applied if the assigned range of IDs is misformatted, the scaling up range runs out of IDs, or the scaling down range does not apply to any in-use nodes.
Prerequisites
Procedure
Annotate the node pool with the IDs to use when scaling up or scaling down, as shown in the following examples.
IDs for scaling up are assigned to node pool pool-a:
Assigning IDs for scaling up
oc annotate kafkanodepool pool-a strimzi.io/next-node-ids="[0,1,2,10-20,30]"
The lowest available ID from this range is used when adding a node to pool-a.
IDs for scaling down are assigned to node pool pool-b:
Assigning IDs for scaling down
oc annotate kafkanodepool pool-b strimzi.io/remove-node-ids="[60-50,9,8,7]"
The highest available ID from this range is removed when scaling down pool-b.
You can now scale the node pool.
For more information, see the following:
On reconciliation, a warning is given if the annotations are misformatted.
8.3.2. (Preview) Adding nodes to a node pool
This procedure describes how to scale up a node pool to add new nodes.
In this procedure, we start with three nodes for node pool pool-a:
Kafka nodes in the node pool
NAME                        READY   STATUS    RESTARTS
my-cluster-pool-a-kafka-0   1/1     Running   0
my-cluster-pool-a-kafka-1   1/1     Running   0
my-cluster-pool-a-kafka-2   1/1     Running   0
Node IDs are appended to the name of the node on creation. We add node my-cluster-pool-a-kafka-3, which has a node ID of 3.
During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID.
Prerequisites
- The Cluster Operator must be deployed.
(Optional) For scale up operations, you can specify the range of node IDs to use.
If you have assigned a range of node IDs for the operation, the ID of the node being added is determined by the sequence of nodes given. Otherwise, the lowest available node ID across the cluster is used.
Procedure
Create a new node in the node pool.
For example, node pool pool-a has three replicas. We add a node by increasing the number of replicas:
oc scale kafkanodepool pool-a --replicas=4
Check the status of the deployment and wait for the pods in the node pool to be created and have a status of READY.
oc get pods -n <my_cluster_operator_namespace>
Output shows four Kafka nodes in the node pool
NAME                        READY   STATUS    RESTARTS
my-cluster-pool-a-kafka-0   1/1     Running   0
my-cluster-pool-a-kafka-1   1/1     Running   0
my-cluster-pool-a-kafka-2   1/1     Running   0
my-cluster-pool-a-kafka-3   1/1     Running   0
Reassign the partitions after increasing the number of nodes in the node pool.
After scaling up a node pool, you can use the Cruise Control add-brokers mode to move partition replicas from existing brokers to the newly added brokers.
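A minimal KafkaRebalance sketch for the add-brokers mode mentioned above; the broker ID matches the node added in this procedure, and goals are omitted so the default goals apply:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: add-brokers
  brokers: [3]
Apply the resource, then approve the optimization proposal that Cruise Control generates before relying on the new broker for balanced load.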
8.3.3. (Preview) Removing nodes from a node pool
This procedure describes how to scale down a node pool to remove nodes.
In this procedure, we start with four nodes for node pool pool-a:
Kafka nodes in the node pool
NAME                        READY   STATUS    RESTARTS
my-cluster-pool-a-kafka-0   1/1     Running   0
my-cluster-pool-a-kafka-1   1/1     Running   0
my-cluster-pool-a-kafka-2   1/1     Running   0
my-cluster-pool-a-kafka-3   1/1     Running   0
Node IDs are appended to the name of the node on creation. We remove node my-cluster-pool-a-kafka-3, which has a node ID of 3.
During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID.
Prerequisites
- The Cluster Operator must be deployed.
(Optional) For scale down operations, you can specify the range of node IDs to use in the operation.
If you have assigned a range of node IDs for the operation, the ID of the node being removed is determined by the sequence of nodes given. Otherwise, the node with the highest available ID in the node pool is removed.
Procedure
Reassign the partitions before decreasing the number of nodes in the node pool.
Before scaling down a node pool, you can use the Cruise Control remove-brokers mode to move partition replicas off the brokers that are going to be removed (see the sketch at the end of this procedure).
After the reassignment process is complete, and the node being removed has no live partitions, reduce the number of Kafka nodes in the node pool.
For example, node pool pool-a has four replicas. We remove a node by decreasing the number of replicas:
oc scale kafkanodepool pool-a --replicas=3
Output shows three Kafka nodes in the node pool
NAME                        READY   STATUS    RESTARTS
my-cluster-pool-a-kafka-0   1/1     Running   0
my-cluster-pool-a-kafka-1   1/1     Running   0
my-cluster-pool-a-kafka-2   1/1     Running   0
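A matching KafkaRebalance sketch for the remove-brokers step referenced earlier in this procedure; broker ID 3 is the node with the highest ID that is removed in this example:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: remove-brokers
  brokers: [3]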
8.3.4. (Preview) Moving nodes between node pools
This procedure describes how to move nodes between source and target Kafka node pools without downtime. You create a new node on the target node pool and reassign partitions to move data from the old node on the source node pool. When the replicas on the new node are in-sync, you can delete the old node.
In this procedure, we start with two node pools:
- pool-a with three replicas is the target node pool
- pool-b with four replicas is the source node pool
We scale up pool-a, and reassign partitions and scale down pool-b, which results in the following:
- pool-a with four replicas
- pool-b with three replicas
During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID.
Prerequisites
- The Cluster Operator must be deployed.
(Optional) For scale up and scale down operations, you can specify the range of node IDs to use.
If you have assigned node IDs for the operation, the ID of the node being added or removed is determined by the sequence of nodes given. Otherwise, the lowest available node ID across the cluster is used when adding nodes; and the node with the highest available ID in the node pool is removed.
Procedure
Create a new node in the target node pool.
For example, node pool pool-a has three replicas. We add a node by increasing the number of replicas:
oc scale kafkanodepool pool-a --replicas=4
Check the status of the deployment and wait for the pods in the node pool to be created and have a status of READY.
oc get pods -n <my_cluster_operator_namespace>
Output shows four Kafka nodes in the target node pool
NAME                        READY   STATUS    RESTARTS
my-cluster-pool-a-kafka-0   1/1     Running   0
my-cluster-pool-a-kafka-1   1/1     Running   0
my-cluster-pool-a-kafka-4   1/1     Running   0
my-cluster-pool-a-kafka-5   1/1     Running   0
Node IDs are appended to the name of the node on creation. We add node my-cluster-pool-a-kafka-5, which has a node ID of 5.
Reassign the partitions from the old node to the new node.
Before scaling down the source node pool, you can use the Cruise Control remove-brokers mode to move partition replicas off the brokers that are going to be removed.
After the reassignment process is complete, reduce the number of Kafka nodes in the source node pool.
For example, node pool pool-b has four replicas. We remove a node by decreasing the number of replicas:
oc scale kafkanodepool pool-b --replicas=3
The node with the highest ID within a pool is removed.
Output shows three Kafka nodes in the source node pool
NAME                        READY   STATUS    RESTARTS
my-cluster-pool-b-kafka-2   1/1     Running   0
my-cluster-pool-b-kafka-3   1/1     Running   0
my-cluster-pool-b-kafka-6   1/1     Running   0
8.3.5. (Preview) Migrating existing Kafka clusters to use Kafka node pools
This procedure describes how to migrate existing Kafka clusters to use Kafka node pools. After you have updated the Kafka cluster, you can use the node pools to manage the configuration of nodes within each pool.
While the KafkaNodePools
feature gate that enables node pools is in alpha phase, replica and storage configuration in the KafkaNodePool
resource must also be present in the Kafka
resource. The configuration is ignored when node pools are being used.
Prerequisites
Procedure
Create a new KafkaNodePool resource.
- Name the resource kafka.
- Point a strimzi.io/cluster label to your existing Kafka resource.
- Set the replica count and storage configuration to match your current Kafka cluster.
- Set the roles to broker.
Example configuration for a node pool used in migrating a Kafka cluster
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
Apply the KafkaNodePool resource:
oc apply -f <node_pool_configuration_file>
By applying this resource, you switch Kafka to using node pools.
There is no change or rolling update and resources are identical to how they were before.
Update the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration to include +KafkaNodePools.
env:
  - name: STRIMZI_FEATURE_GATES
    value: +KafkaNodePools
Enable the KafkaNodePools feature gate in the Kafka resource using the strimzi.io/node-pools: enabled annotation.
Example configuration for a cluster using ZooKeeper
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled
spec:
  kafka:
    version: 3.5.0
    replicas: 3
    # ...
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
Apply the Kafka resource:
oc apply -f <kafka_configuration_file>
8.4. Configuring the Entity Operator
Use the entityOperator
property in Kafka.spec
to configure the Entity Operator. The Entity Operator is responsible for managing Kafka-related entities in a running Kafka cluster. It comprises the following operators:
- Topic Operator to manage Kafka topics
- User Operator to manage Kafka users
By configuring the Kafka
resource, the Cluster Operator can deploy the Entity Operator, including one or both operators. Once deployed, the operators are automatically configured to handle the topics and users of the Kafka cluster.
Each operator can only monitor a single namespace. For more information, see Section 1.2.1, “Watching AMQ Streams resources in OpenShift namespaces”.
The entityOperator
property supports several sub-properties:
- tlsSidecar
- topicOperator
- userOperator
- template
The tlsSidecar
property contains the configuration of the TLS sidecar container, which is used to communicate with ZooKeeper.
The template
property contains the configuration of the Entity Operator pod, such as labels, annotations, affinity, and tolerations. For more information on configuring templates, see Section 8.16, “Customizing OpenShift resources”.
The topicOperator
property contains the configuration of the Topic Operator. When this option is missing, the Entity Operator is deployed without the Topic Operator.
The userOperator
property contains the configuration of the User Operator. When this option is missing, the Entity Operator is deployed without the User Operator.
For more information on the properties used to configure the Entity Operator, see the EntityOperatorSpec schema reference.
Example of basic configuration enabling both operators
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    topicOperator: {}
    userOperator: {}
If an empty object ({}) is used for the topicOperator and userOperator, all properties use their default values.
When both topicOperator and userOperator properties are missing, the Entity Operator is not deployed.
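For example, a minimal sketch (following the example above) that deploys the Entity Operator with only the Topic Operator:
entityOperator:
  # userOperator is omitted, so the User Operator is not deployed
  topicOperator: {}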
8.4.1. Configuring the Topic Operator
Use topicOperator
properties in Kafka.spec.entityOperator
to configure the Topic Operator.
If you are using the preview of unidirectional topic management, the following properties are not used and will be ignored: Kafka.spec.entityOperator.topicOperator.zookeeperSessionTimeoutSeconds and Kafka.spec.entityOperator.topicOperator.topicMetadataMaxAttempts. For more information on unidirectional topic management, refer to Section 9.1, “Topic management modes”.
The following properties are supported:
watchedNamespace
- The OpenShift namespace in which the Topic Operator watches for KafkaTopic resources. Default is the namespace where the Kafka cluster is deployed.

reconciliationIntervalSeconds
- The interval between periodic reconciliations in seconds. Default 120.

zookeeperSessionTimeoutSeconds
- The ZooKeeper session timeout in seconds. Default 18.

topicMetadataMaxAttempts
- The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential back-off. Consider increasing this value when topic creation might take more time due to the number of partitions or replicas. Default 6.

image
- The image property can be used to configure the container image which is used. To learn more, refer to the information provided on configuring the image property.

resources
- The resources property configures the amount of resources allocated to the Topic Operator. You can specify requests and limits for memory and cpu resources. The requests should be enough to ensure a stable performance of the operator.

logging
- The logging property configures the logging of the Topic Operator. To learn more, refer to the information provided on Topic Operator logging.
Example Topic Operator configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      resources:
        requests:
          cpu: "1"
          memory: 500Mi
        limits:
          cpu: "1"
          memory: 500Mi
      # ...
8.4.2. Configuring the User Operator
Use userOperator
properties in Kafka.spec.entityOperator
to configure the User Operator. The following properties are supported:
watchedNamespace
- The OpenShift namespace in which the User Operator watches for KafkaUser resources. Default is the namespace where the Kafka cluster is deployed.

reconciliationIntervalSeconds
- The interval between periodic reconciliations in seconds. Default 120.

image
- The image property can be used to configure the container image which is used. To learn more, refer to the information provided on configuring the image property.

resources
- The resources property configures the amount of resources allocated to the User Operator. You can specify requests and limits for memory and cpu resources. The requests should be enough to ensure a stable performance of the operator.

logging
- The logging property configures the logging of the User Operator. To learn more, refer to the information provided on User Operator logging.

secretPrefix
- The secretPrefix property adds a prefix to the name of all Secrets created from the KafkaUser resource. For example, secretPrefix: kafka- would prefix all Secret names with kafka-. So a KafkaUser named my-user would create a Secret named kafka-my-user.
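A minimal sketch of this setting, using the kafka- prefix described above:
userOperator:
  # Secrets created from KafkaUser resources are named kafka-<user_name>
  secretPrefix: kafka-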
Example User Operator configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    userOperator:
      watchedNamespace: my-user-namespace
      reconciliationIntervalSeconds: 60
      resources:
        requests:
          cpu: "1"
          memory: 500Mi
        limits:
          cpu: "1"
          memory: 500Mi
      # ...
8.5. Configuring the Cluster Operator
Use environment variables to configure the Cluster Operator. Specify the environment variables for the container image of the Cluster Operator in its Deployment
configuration file.
The Deployment
configuration file provided with the AMQ Streams release artifacts is install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
.
You can use the following environment variables to configure the Cluster Operator. If you are running Cluster Operator replicas in standby mode, there are additional environment variables for enabling leader election.
STRIMZI_NAMESPACE
A comma-separated list of namespaces that the operator operates in. When not set, set to an empty string, or set to *, the Cluster Operator operates in all namespaces.
The Cluster Operator deployment might use the downward API to set this automatically to the namespace the Cluster Operator is deployed in.
Example configuration for Cluster Operator namespaces
env:
  - name: STRIMZI_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
- Optional, default is 120000 ms. The interval between periodic reconciliations, in milliseconds.
STRIMZI_OPERATION_TIMEOUT_MS
- Optional, default 300000 ms. The timeout for internal operations, in milliseconds. Increase this value when using AMQ Streams on clusters where regular OpenShift operations take longer than usual (because of slow downloading of Docker images, for example).
STRIMZI_ZOOKEEPER_ADMIN_SESSION_TIMEOUT_MS
- Optional, default 10000 ms. The session timeout for the Cluster Operator’s ZooKeeper admin client, in milliseconds. Increase the value if ZooKeeper requests from the Cluster Operator are regularly failing due to timeout issues. There is a maximum allowed session time set on the ZooKeeper server side via the maxSessionTimeout config. By default, the maximum session timeout value is 20 times the default tickTime (whose default is 2000) at 40000 ms. If you require a higher timeout, change the maxSessionTimeout ZooKeeper server configuration value.

STRIMZI_OPERATIONS_THREAD_POOL_SIZE
- Optional, default 10. The worker thread pool size, which is used for various asynchronous and blocking operations that are run by the Cluster Operator.
STRIMZI_OPERATOR_NAME
- Optional, defaults to the pod’s hostname. The operator name identifies the AMQ Streams instance when emitting OpenShift events.
STRIMZI_OPERATOR_NAMESPACE
The name of the namespace where the Cluster Operator is running. Do not configure this variable manually. Use the downward API.
env:
  - name: STRIMZI_OPERATOR_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
STRIMZI_OPERATOR_NAMESPACE_LABELS
Optional. The labels of the namespace where the AMQ Streams Cluster Operator is running. Use namespace labels to configure the namespace selector in network policies. Network policies allow the AMQ Streams Cluster Operator access only to the operands from the namespace with these labels. When not set, the namespace selector in network policies is configured to allow access to the Cluster Operator from any namespace in the OpenShift cluster.
env:
  - name: STRIMZI_OPERATOR_NAMESPACE_LABELS
    value: label1=value1,label2=value2
STRIMZI_LABELS_EXCLUSION_PATTERN
Optional, default regex pattern is ^app.kubernetes.io/(?!part-of).*. The regex exclusion pattern used to filter label propagation from the main custom resource to its subresources. The labels exclusion filter is not applied to labels in template sections such as spec.kafka.template.pod.metadata.labels.
env:
  - name: STRIMZI_LABELS_EXCLUSION_PATTERN
    value: "^key1.*"
STRIMZI_CUSTOM_{COMPONENT_NAME}_LABELS
Optional. One or more custom labels to apply to all the pods created by the {COMPONENT_NAME} custom resource. The Cluster Operator labels the pods when the custom resource is created or is next reconciled.
Labels can be applied to the following components:
- KAFKA
- KAFKA_CONNECT
- KAFKA_CONNECT_BUILD
- ZOOKEEPER
- ENTITY_OPERATOR
- KAFKA_MIRROR_MAKER2
- KAFKA_MIRROR_MAKER
- CRUISE_CONTROL
- KAFKA_BRIDGE
- KAFKA_EXPORTER
STRIMZI_CUSTOM_RESOURCE_SELECTOR
Optional. The label selector to filter the custom resources handled by the Cluster Operator. The operator will operate only on those custom resources that have the specified labels set. Resources without these labels will not be seen by the operator. The label selector applies to Kafka, KafkaConnect, KafkaBridge, KafkaMirrorMaker, and KafkaMirrorMaker2 resources. KafkaRebalance and KafkaConnector resources are operated only when their corresponding Kafka and Kafka Connect clusters have the matching labels.
env:
  - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR
    value: label1=value1,label2=value2
STRIMZI_KAFKA_IMAGES
- Required. The mapping from the Kafka version to the corresponding Docker image containing a Kafka broker for that version. The required syntax is whitespace or comma-separated <version>=<image> pairs. For example 3.4.0=registry.redhat.io/amq-streams/kafka-34-rhel8:2.5.1, 3.5.0=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.1. This is used when a Kafka.spec.kafka.version property is specified but not the Kafka.spec.kafka.image in the Kafka resource.

STRIMZI_DEFAULT_KAFKA_INIT_IMAGE
- Optional, default registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.1. The image name to use as default for the init container if no image is specified as the kafka-init-image in the Kafka resource. The init container is started before the broker for initial configuration work, such as rack support.

STRIMZI_KAFKA_CONNECT_IMAGES
- Required. The mapping from the Kafka version to the corresponding Docker image of Kafka Connect for that version. The required syntax is whitespace or comma-separated <version>=<image> pairs. For example 3.4.0=registry.redhat.io/amq-streams/kafka-34-rhel8:2.5.1, 3.5.0=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.1. This is used when a KafkaConnect.spec.version property is specified but not the KafkaConnect.spec.image.

STRIMZI_KAFKA_MIRROR_MAKER_IMAGES
- Required. The mapping from the Kafka version to the corresponding Docker image of MirrorMaker for that version. The required syntax is whitespace or comma-separated <version>=<image> pairs. For example 3.4.0=registry.redhat.io/amq-streams/kafka-34-rhel8:2.5.1, 3.5.0=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.1. This is used when a KafkaMirrorMaker.spec.version property is specified but not the KafkaMirrorMaker.spec.image.

STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE
- Optional, default registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.1. The image name to use as the default when deploying the Topic Operator if no image is specified as the Kafka.spec.entityOperator.topicOperator.image in the Kafka resource.

STRIMZI_DEFAULT_USER_OPERATOR_IMAGE
- Optional, default registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.1. The image name to use as the default when deploying the User Operator if no image is specified as the Kafka.spec.entityOperator.userOperator.image in the Kafka resource.

STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE
- Optional, default registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.1. The image name to use as the default when deploying the sidecar container for the Entity Operator if no image is specified as the Kafka.spec.entityOperator.tlsSidecar.image in the Kafka resource. The sidecar provides TLS support.

STRIMZI_IMAGE_PULL_POLICY
- Optional. The ImagePullPolicy that is applied to containers in all pods managed by the Cluster Operator. The valid values are Always, IfNotPresent, and Never. If not specified, the OpenShift defaults are used. Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters.

STRIMZI_IMAGE_PULL_SECRETS
- Optional. A comma-separated list of Secret names. The secrets referenced here contain the credentials to the container registries where the container images are pulled from. The secrets are specified in the imagePullSecrets property for all pods created by the Cluster Operator. Changing this list results in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters.

STRIMZI_KUBERNETES_VERSION
Optional. Overrides the OpenShift version information detected from the API server.
Example configuration for OpenShift version override
env:
  - name: STRIMZI_KUBERNETES_VERSION
    value: |
      major=1
      minor=16
      gitVersion=v1.16.2
      gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b
      gitTreeState=clean
      buildDate=2019-10-15T19:09:08Z
      goVersion=go1.12.10
      compiler=gc
      platform=linux/amd64
KUBERNETES_SERVICE_DNS_DOMAIN
Optional. Overrides the default OpenShift DNS domain name suffix.
By default, services assigned in the OpenShift cluster have a DNS domain name that uses the default suffix cluster.local.
For example, for broker kafka-0:
<cluster-name>-kafka-0.<cluster-name>-kafka-brokers.<namespace>.svc.cluster.local
The DNS domain name is added to the Kafka broker certificates used for hostname verification.
If you are using a different DNS domain name suffix in your cluster, change the KUBERNETES_SERVICE_DNS_DOMAIN environment variable from the default to the one you are using in order to establish a connection with the Kafka brokers, as in the sketch below.
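A minimal sketch of the override; the cluster.example.com suffix is an assumed example value:
env:
  - name: KUBERNETES_SERVICE_DNS_DOMAIN
    value: cluster.example.com

STRIMZI_CONNECT_BUILD_TIMEOUT_MS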
- Optional, default 300000 ms. The timeout for building new Kafka Connect images with additional connectors, in milliseconds. Consider increasing this value when using AMQ Streams to build container images containing many connectors or using a slow container registry.
STRIMZI_NETWORK_POLICY_GENERATION
Optional, default true. Network policy for resources. Network policies allow connections between Kafka components.
Set this environment variable to false to disable network policy generation. You might do this, for example, if you want to use custom network policies. Custom network policies allow more control over maintaining the connections between components.

STRIMZI_DNS_CACHE_TTL
- Optional, default 30. Number of seconds to cache successful name lookups in the local DNS resolver. Any negative value means cache forever. Zero means do not cache, which can be useful for avoiding connection errors due to long caching policies being applied.

STRIMZI_POD_SET_RECONCILIATION_ONLY
- Optional, default false. When set to true, the Cluster Operator reconciles only the StrimziPodSet resources and any changes to the other custom resources (Kafka, KafkaConnect, and so on) are ignored. This mode is useful for ensuring that your pods are recreated if needed, but no other changes happen to the clusters.

STRIMZI_FEATURE_GATES
- Optional. Enables or disables the features and functionality controlled by feature gates.

STRIMZI_POD_SECURITY_PROVIDER_CLASS
- Optional. Configuration for the pluggable PodSecurityProvider class, which can be used to provide the security context configuration for Pods and containers.
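For example, a minimal sketch of the setting; the shortcut value restricted comes from the upstream operator and is an assumption here, so verify it against your release:
env:
  - name: STRIMZI_POD_SECURITY_PROVIDER_CLASS
    value: restricted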
8.5.1. Restricting access to the Cluster Operator using network policy
Use the STRIMZI_OPERATOR_NAMESPACE_LABELS
environment variable to establish network policy for the Cluster Operator using namespace labels.
The Cluster Operator can run in the same namespace as the resources it manages, or in a separate namespace. By default, the STRIMZI_OPERATOR_NAMESPACE
environment variable is configured to use the downward API to find the namespace the Cluster Operator is running in. If the Cluster Operator is running in the same namespace as the resources, only local access is required and allowed by AMQ Streams.
If the Cluster Operator is running in a separate namespace to the resources it manages, any namespace in the OpenShift cluster is allowed access to the Cluster Operator unless network policy is configured. By adding namespace labels, access to the Cluster Operator is restricted to the namespaces specified.
Network policy configured for the Cluster Operator deployment
#...
env:
  # ...
  - name: STRIMZI_OPERATOR_NAMESPACE_LABELS
    value: label1=value1,label2=value2
#...
8.5.2. Configuring periodic reconciliation by the Cluster Operator
Use the STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
variable to set the time interval for periodic reconciliations by the Cluster Operator. Replace its value with the required interval in milliseconds.
Reconciliation period configured for the Cluster Operator deployment
#...
env:
  # ...
  - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
    value: "120000"
#...
The Cluster Operator reacts to all notifications about applicable cluster resources received from the OpenShift cluster. If the operator is not running, or if a notification is not received for any reason, resources get out of sync with the state of the running OpenShift cluster. To handle failovers properly, the Cluster Operator executes a periodic reconciliation process that compares the state of the resources with the current cluster deployments, ensuring a consistent state across all of them.
8.5.3. Running multiple Cluster Operator replicas with leader election
The default Cluster Operator configuration enables leader election to run multiple parallel replicas of the Cluster Operator. One replica is elected as the active leader and operates the deployed resources. The other replicas run in standby mode. When the leader stops or fails, one of the standby replicas is elected as the new leader and starts operating the deployed resources.
By default, AMQ Streams runs with a single Cluster Operator replica that is always the leader replica. When a single Cluster Operator replica stops or fails, OpenShift starts a new replica.
Running the Cluster Operator with multiple replicas is not essential, but it is useful to have replicas on standby in case of large-scale disruption caused by a major failure. For example, suppose multiple worker nodes or an entire availability zone fails. This failure might cause the Cluster Operator pod and many Kafka pods to go down at the same time. If subsequent pod scheduling causes congestion through lack of resources, this can delay operations when running a single Cluster Operator.
8.5.3.1. Enabling leader election for Cluster Operator replicas
Configure leader election environment variables when running additional Cluster Operator replicas. The following environment variables are supported:
STRIMZI_LEADER_ELECTION_ENABLED
- Optional, disabled (false) by default. Enables or disables leader election, which allows additional Cluster Operator replicas to run on standby.
Leader election is disabled by default. It is only enabled when applying this environment variable on installation.
STRIMZI_LEADER_ELECTION_LEASE_NAME
- Required when leader election is enabled. The name of the OpenShift Lease resource that is used for the leader election.

STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
Required when leader election is enabled. The namespace where the OpenShift Lease resource used for leader election is created. You can use the downward API to configure it to the namespace where the Cluster Operator is deployed.
env:
  - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
STRIMZI_LEADER_ELECTION_IDENTITY
Required when leader election is enabled. Configures the identity of a given Cluster Operator instance used during the leader election. The identity must be unique for each operator instance. You can use the downward API to configure it to the name of the pod where the Cluster Operator is deployed.
env:
  - name: STRIMZI_LEADER_ELECTION_IDENTITY
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
STRIMZI_LEADER_ELECTION_LEASE_DURATION_MS
- Optional, default 15000 ms. Specifies the duration the acquired lease is valid.
STRIMZI_LEADER_ELECTION_RENEW_DEADLINE_MS
- Optional, default 10000 ms. Specifies the period the leader should try to maintain leadership.
STRIMZI_LEADER_ELECTION_RETRY_PERIOD_MS
- Optional, default 2000 ms. Specifies the frequency of updates to the lease lock by the leader.
8.5.3.2. Configuring Cluster Operator replicas
To run additional Cluster Operator replicas in standby mode, you will need to increase the number of replicas and enable leader election. To configure leader election, use the leader election environment variables.
To make the required changes, configure the following Cluster Operator installation files located in install/cluster-operator/:
- 060-Deployment-strimzi-cluster-operator.yaml
- 022-ClusterRole-strimzi-cluster-operator-role.yaml
- 022-RoleBinding-strimzi-cluster-operator.yaml
Leader election has its own ClusterRole
and RoleBinding
RBAC resources that target the namespace where the Cluster Operator is running, rather than the namespace it is watching.
The default deployment configuration creates a Lease
resource called strimzi-cluster-operator
in the same namespace as the Cluster Operator. The Cluster Operator uses leases to manage leader election. The RBAC resources provide the permissions to use the Lease
resource. If you use a different Lease
name or namespace, update the ClusterRole
and RoleBinding
files accordingly.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
Edit the Deployment
resource that is used to deploy the Cluster Operator, which is defined in the 060-Deployment-strimzi-cluster-operator.yaml
file.
Change the replicas property from the default (1) to a value that matches the required number of replicas.
Increasing the number of Cluster Operator replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
spec:
  replicas: 3
Check that the leader election env properties are set. If they are not set, configure them.
To enable leader election, STRIMZI_LEADER_ELECTION_ENABLED must be set to true (default).
In this example, the name of the lease is changed to my-strimzi-cluster-operator.
Configuring leader election environment variables for the Cluster Operator
# ...
spec:
  containers:
    - name: strimzi-cluster-operator
      # ...
      env:
        - name: STRIMZI_LEADER_ELECTION_ENABLED
          value: "true"
        - name: STRIMZI_LEADER_ELECTION_LEASE_NAME
          value: "my-strimzi-cluster-operator"
        - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: STRIMZI_LEADER_ELECTION_IDENTITY
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
For a description of the available environment variables, see Section 8.5.3.1, “Enabling leader election for Cluster Operator replicas”.
If you specified a different name or namespace for the Lease resource used in leader election, update the RBAC resources.
(optional) Edit the ClusterRole resource in the 022-ClusterRole-strimzi-cluster-operator-role.yaml file.
Update resourceNames with the name of the Lease resource.
Updating the ClusterRole references to the lease
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-cluster-operator-leader-election
  labels:
    app: strimzi
rules:
  - apiGroups:
      - coordination.k8s.io
    resourceNames:
      - my-strimzi-cluster-operator
# ...
(optional) Edit the RoleBinding resource in the 022-RoleBinding-strimzi-cluster-operator.yaml file.
Update subjects.name and subjects.namespace with the name of the Lease resource and the namespace where it was created.
Updating the RoleBinding references to the lease
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: strimzi-cluster-operator-leader-election
  labels:
    app: strimzi
subjects:
  - kind: ServiceAccount
    name: my-strimzi-cluster-operator
    namespace: myproject
# ...
Deploy the Cluster Operator:
oc create -f install/cluster-operator -n myproject
Check the status of the deployment:
oc get deployments -n myproject
Output shows the deployment name and readiness
NAME                      READY  UP-TO-DATE  AVAILABLE
strimzi-cluster-operator  3/3    3           3
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows the correct number of replicas.
8.5.4. Configuring Cluster Operator HTTP proxy settings
If you are running a Kafka cluster behind an HTTP proxy, you can still pass data in and out of the cluster. For example, you can run Kafka Connect with connectors that push and pull data from outside the proxy. Or you can use a proxy to connect with an authorization server.
Configure the Cluster Operator deployment to specify the proxy environment variables. The Cluster Operator accepts standard proxy configuration (HTTP_PROXY
, HTTPS_PROXY
and NO_PROXY
) as environment variables. The proxy settings are applied to all AMQ Streams containers.
The format for a proxy address is http://<ip_address>:<port_number>. To set up a proxy with a name and password, the format is http://<username>:<password>@<ip_address>:<port_number>.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
To add proxy environment variables to the Cluster Operator, update its Deployment configuration (install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml).
Example proxy configuration for the Cluster Operator
apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
        # ...
        env:
          # ...
          - name: "HTTP_PROXY"
            value: "http://proxy.com" 1
          - name: "HTTPS_PROXY"
            value: "https://proxy.com" 2
          - name: "NO_PROXY"
            value: "internal.com, other.domain.com" 3
  # ...
Alternatively, edit the Deployment directly:
oc edit deployment strimzi-cluster-operator
If you updated the YAML file instead of editing the Deployment directly, apply the changes:
oc create -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
8.5.5. Disabling FIPS mode using Cluster Operator configuration
AMQ Streams automatically switches to FIPS mode when running on a FIPS-enabled OpenShift cluster. Disable FIPS mode by setting the FIPS_MODE
environment variable to disabled
in the deployment configuration for the Cluster Operator. With FIPS mode disabled, AMQ Streams automatically disables FIPS in the OpenJDK for all components. With FIPS mode disabled, AMQ Streams is not FIPS compliant. The AMQ Streams operators, as well as all operands, run in the same way as if they were running on an OpenShift cluster without FIPS enabled.
Procedure
To disable the FIPS mode in the Cluster Operator, update its Deployment configuration (install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml) and add the FIPS_MODE environment variable.
Example FIPS configuration for the Cluster Operator
apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
        # ...
        env:
          # ...
          - name: "FIPS_MODE"
            value: "disabled" 1
  # ...
- 1
- Disables the FIPS mode.
Alternatively, edit the Deployment directly:
oc edit deployment strimzi-cluster-operator
If you updated the YAML file instead of editing the Deployment directly, apply the changes:
oc apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
8.6. Configuring Kafka Connect
Update the spec
properties of the KafkaConnect
custom resource to configure your Kafka Connect deployment.
Use Kafka Connect to set up external data connections to your Kafka cluster.
For a deeper understanding of the Kafka Connect cluster configuration options, refer to the AMQ Streams Custom Resource API Reference.
KafkaConnector configuration
KafkaConnector
resources allow you to create and manage connector instances for Kafka Connect in an OpenShift-native way.
In your Kafka Connect configuration, you enable KafkaConnectors for a Kafka Connect cluster by adding the strimzi.io/use-connector-resources
annotation. You can also add a build
configuration so that AMQ Streams automatically builds a container image with the connector plugins you require for your data connections. External configuration for Kafka Connect connectors is specified through the externalConfiguration
property.
To manage connectors, you can use KafkaConnector custom resources or the Kafka Connect REST API. KafkaConnector resources must be deployed to the same namespace as the Kafka Connect cluster they link to. For more information on using these methods to create, reconfigure, or delete connectors, see Adding connectors.
Connector configuration is passed to Kafka Connect as part of an HTTP request and stored within Kafka itself. ConfigMaps and Secrets are standard OpenShift resources used for storing configurations and confidential data. You can use ConfigMaps and Secrets to configure certain elements of a connector. You can then reference the configuration values in HTTP REST commands, which keeps the configuration separate and more secure, if needed. This method applies especially to confidential data, such as usernames, passwords, or certificates.
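For example, a minimal sketch of this approach using a configuration provider; the provider class io.strimzi.kafka.KubernetesSecretConfigProvider, the alias secrets, and the Secret name my-connector-secret with key dbPassword are assumptions for illustration, so verify the provider and the RBAC it needs against your release:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  config:
    # Enable a config provider in the worker configuration
    config.providers: secrets
    config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  # ...
  config:
    # Reference the dbPassword key of Secret my-connector-secret in namespace
    # my-project instead of embedding the password in the connector config
    database.password: ${secrets:my-project/my-connector-secret:dbPassword}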
Handling high volumes of messages
You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages.
Example KafkaConnect
custom resource configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect 1
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 2
spec:
  replicas: 3 3
  authentication: 4
    type: tls
    certificateAndKey:
      certificate: source.crt
      key: source.key
      secretName: my-user-source
  bootstrapServers: my-cluster-kafka-bootstrap:9092 5
  tls: 6
    trustedCertificates:
      - secretName: my-cluster-cluster-cert
        certificate: ca.crt
      - secretName: my-cluster-cluster-cert
        certificate: ca2.crt
  config: 7
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  build: 8
    output: 9
      type: docker
      image: my-registry.io/my-org/my-connect-cluster:latest
      pushSecret: my-registry-credentials
    plugins: 10
      - name: debezium-postgres-connector
        artifacts:
          - type: tgz
            url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/2.1.3.Final/debezium-connector-postgres-2.1.3.Final-plugin.tar.gz
            sha512sum: c4ddc97846de561755dc0b021a62aba656098829c70eb3ade3b817ce06d852ca12ae50c0281cc791a5a131cb7fc21fb15f4b8ee76c6cae5dd07f9c11cb7c6e79
      - name: camel-telegram
        artifacts:
          - type: tgz
            url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.11.5/camel-telegram-kafka-connector-0.11.5-package.tar.gz
            sha512sum: d6d9f45e0d1dbfcc9f6d1c7ca2046168c764389c78bc4b867dab32d24f710bb74ccf2a007d7d7a8af2dfca09d9a52ccbc2831fc715c195a3634cca055185bd91
  externalConfiguration: 11
    env:
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsAccessKey
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsSecretAccessKey
  resources: 12
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 2Gi
  logging: 13
    type: inline
    loggers:
      log4j.rootLogger: INFO
  readinessProbe: 14
    initialDelaySeconds: 15
    timeoutSeconds: 5
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 5
  metricsConfig: 15
    type: jmxPrometheusExporter
    valueFrom:
      configMapKeyRef:
        name: my-config-map
        key: my-key
  jvmOptions: 16
    "-Xmx": "1g"
    "-Xms": "1g"
  image: my-org/my-image:latest 17
  rack:
    topologyKey: topology.kubernetes.io/zone 18
  template: 19
    pod:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: application
                    operator: In
                    values:
                      - postgresql
                      - mongodb
              topologyKey: "kubernetes.io/hostname"
    connectContainer: 20
      env:
        - name: OTEL_SERVICE_NAME
          value: my-otel-service
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://otlp-host:4317"
  tracing:
    type: opentelemetry 21
- 1
- Use KafkaConnect.
- 2
- Enables KafkaConnectors for the Kafka Connect cluster.
- 3
- The number of replica nodes for the workers that run tasks.
- 4
- Authentication for the Kafka Connect cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. By default, Kafka Connect connects to Kafka brokers using a plain text connection.
- 5
- Bootstrap server for connection to the Kafka cluster.
- 6
- TLS encryption with key names under which TLS certificates are stored in X.509 format for the cluster. If certificates are stored in the same secret, it can be listed multiple times.
- 7
- Kafka Connect configuration of workers (not connectors). Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams.
- 8
- Build configuration properties for building a container image with connector plugins automatically.
- 9
- (Required) Configuration of the container registry where new images are pushed.
- 10
- (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one artifact.
- 11
- External configuration for connectors using environment variables, as shown here, or volumes. You can also use configuration provider plugins to load configuration values from external sources.
- 12
- Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.
- 13
- Specified Kafka Connect loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL, or OFF.
- 14
- Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
- 15
- Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key.
- 16
- JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka Connect.
- 17
- ADVANCED OPTION: Container image configuration, which is recommended only in special situations.
- 18
- SPECIALIZED OPTION: Rack awareness configuration for the deployment. This is a specialized option intended for a deployment within the same location, not across regions. Use this option if you want connectors to consume from the closest replica rather than the leader replica. In certain cases, consuming from the closest replica can improve network utilization or reduce costs. The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. To consume from the closest replica, enable the RackAwareReplicaSelector in the Kafka broker configuration.
- 19
- Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname.
- 20
- Environment variables are set for distributed tracing.
- 21
- Distributed tracing is enabled by using OpenTelemetry.
8.6.1. Configuring Kafka Connect user authorization
This procedure describes how to authorize user access to Kafka Connect.
When any type of authorization is being used in Kafka, a Kafka Connect user requires read/write access rights to the consumer group and the internal topics of Kafka Connect.
The properties for the consumer group and internal topics are automatically configured by AMQ Streams, or they can be specified explicitly in the spec
of the KafkaConnect
resource.
Example configuration properties in the KafkaConnect
resource
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster 1
    offset.storage.topic: my-connect-cluster-offsets 2
    config.storage.topic: my-connect-cluster-configs 3
    status.storage.topic: my-connect-cluster-status 4
  # ...
# ...
This procedure shows how access is provided when simple
authorization is being used.
Simple authorization uses ACL rules, handled by the Kafka AclAuthorizer
plugin, to provide the right level of access. For more information on configuring a KafkaUser
resource to use simple authorization, see the AclRule
schema reference.
The default values for the consumer group and topics will differ when running multiple instances.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the authorization property in the KafkaUser resource to provide access rights to the user.
In the following example, access rights are configured for the Kafka Connect topics and consumer group using literal name values:

Property | Name
---|---
offset.storage.topic | connect-cluster-offsets
status.storage.topic | connect-cluster-status
config.storage.topic | connect-cluster-configs
group | connect-cluster
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # ...
  authorization:
    type: simple
    acls:
      # access to offset.storage.topic
      - resource:
          type: topic
          name: connect-cluster-offsets
          patternType: literal
        operations:
          - Create
          - Describe
          - Read
          - Write
        host: "*"
      # access to status.storage.topic
      - resource:
          type: topic
          name: connect-cluster-status
          patternType: literal
        operations:
          - Create
          - Describe
          - Read
          - Write
        host: "*"
      # access to config.storage.topic
      - resource:
          type: topic
          name: connect-cluster-configs
          patternType: literal
        operations:
          - Create
          - Describe
          - Read
          - Write
        host: "*"
      # consumer group
      - resource:
          type: group
          name: connect-cluster
          patternType: literal
        operations:
          - Read
        host: "*"
Create or update the resource.
oc apply -f KAFKA-USER-CONFIG-FILE
8.7. Configuring Kafka MirrorMaker 2
Update the spec
properties of the KafkaMirrorMaker2
custom resource to configure your MirrorMaker 2 deployment. MirrorMaker 2 uses source cluster configuration for data consumption and target cluster configuration for data output.
MirrorMaker 2 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters.
You configure MirrorMaker 2 to define the Kafka Connect deployment, including the connection details of the source and target clusters, and then run a set of MirrorMaker 2 connectors to make the connection.
MirrorMaker 2 supports topic configuration synchronization between the source and target clusters. You specify source topics in the MirrorMaker 2 configuration. MirrorMaker 2 monitors the source topics. MirrorMaker 2 detects and propagates changes to the source topics to the remote topics. Changes might include automatically creating missing topics and partitions.
In most cases you write to local topics and read from remote topics. Though write operations are not prevented on remote topics, they should be avoided.
The configuration must specify:
- Each Kafka cluster
- Connection information for each cluster, including authentication
The replication flow and direction
- Cluster to cluster
- Topic to topic
For a deeper understanding of the Kafka MirrorMaker 2 cluster configuration options, refer to the AMQ Streams Custom Resource API Reference.
MirrorMaker 2 resource configuration differs from the previous version of MirrorMaker, which is now deprecated. There is currently no legacy support, so any resources must be manually converted into the new format.
Default configuration
MirrorMaker 2 provides default configuration values for properties such as replication factors. A minimal configuration, with defaults left unchanged, would be something like this example:
Minimal configuration for MirrorMaker 2
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  version: 3.5.0
  connectCluster: "my-cluster-target"
  clusters:
    - alias: "my-cluster-source"
      bootstrapServers: my-cluster-source-kafka-bootstrap:9092
    - alias: "my-cluster-target"
      bootstrapServers: my-cluster-target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector: {}
You can configure access control for source and target clusters using mTLS or SASL authentication. This procedure shows a configuration that uses TLS encryption and mTLS authentication for the source and target cluster.
You can specify the topics and consumer groups you wish to replicate from a source cluster in the KafkaMirrorMaker2 resource. You use the topicsPattern and groupsPattern properties to do this. You can provide a list of names or use a regular expression. By default, all topics and consumer groups are replicated if you do not set the topicsPattern and groupsPattern properties. You can also replicate all topics and consumer groups by using ".*" as a regular expression. However, try to specify only the topics and consumer groups you need to avoid causing any unnecessary extra load on the cluster. A sketch of pattern-based filtering follows.
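A minimal sketch of pattern-based filtering in the mirrors section; the orders-* names are assumed examples:
mirrors:
  - sourceCluster: "my-cluster-source"
    targetCluster: "my-cluster-target"
    sourceConnector: {}
    # Replicate only matching topics and track offsets for matching groups
    topicsPattern: "orders-.*"
    groupsPattern: "orders-app-.*"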
Handling high volumes of messages
You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages.
Example KafkaMirrorMaker2
custom resource configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  version: 3.5.0 1
  replicas: 3 2
  connectCluster: "my-cluster-target" 3
  clusters: 4
    - alias: "my-cluster-source" 5
      authentication: 6
        certificateAndKey:
          certificate: source.crt
          key: source.key
          secretName: my-user-source
        type: tls
      bootstrapServers: my-cluster-source-kafka-bootstrap:9092 7
      tls: 8
        trustedCertificates:
          - certificate: ca.crt
            secretName: my-cluster-source-cluster-ca-cert
    - alias: "my-cluster-target" 9
      authentication: 10
        certificateAndKey:
          certificate: target.crt
          key: target.key
          secretName: my-user-target
        type: tls
      bootstrapServers: my-cluster-target-kafka-bootstrap:9092 11
      config: 12
        config.storage.replication.factor: 1
        offset.storage.replication.factor: 1
        status.storage.replication.factor: 1
      tls: 13
        trustedCertificates:
          - certificate: ca.crt
            secretName: my-cluster-target-cluster-ca-cert
  mirrors: 14
    - sourceCluster: "my-cluster-source" 15
      targetCluster: "my-cluster-target" 16
      sourceConnector: 17
        tasksMax: 10 18
        autoRestart: 19
          enabled: true
        config:
          replication.factor: 1 20
          offset-syncs.topic.replication.factor: 1 21
          sync.topic.acls.enabled: "false" 22
          refresh.topics.interval.seconds: 60 23
          replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" 24
      heartbeatConnector: 25
        autoRestart:
          enabled: true
        config:
          heartbeats.topic.replication.factor: 1 26
          replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy"
      checkpointConnector: 27
        autoRestart:
          enabled: true
        config:
          checkpoints.topic.replication.factor: 1 28
          refresh.groups.interval.seconds: 600 29
          sync.group.offsets.enabled: true 30
          sync.group.offsets.interval.seconds: 60 31
          emit.checkpoints.interval.seconds: 60 32
          replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy"
      topicsPattern: "topic1|topic2|topic3" 33
      groupsPattern: "group1|group2|group3" 34
  resources: 35
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 2Gi
  logging: 36
    type: inline
    loggers:
      connect.root.logger.level: INFO
  readinessProbe: 37
    initialDelaySeconds: 15
    timeoutSeconds: 5
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 5
  jvmOptions: 38
    "-Xmx": "1g"
    "-Xms": "1g"
  image: my-org/my-image:latest 39
  rack:
    topologyKey: topology.kubernetes.io/zone 40
  template: 41
    pod:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: application
                    operator: In
                    values:
                      - postgresql
                      - mongodb
              topologyKey: "kubernetes.io/hostname"
    connectContainer: 42
      env:
        - name: OTEL_SERVICE_NAME
          value: my-otel-service
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://otlp-host:4317"
  tracing:
    type: opentelemetry 43
  externalConfiguration: 44
    env:
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsAccessKey
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsSecretAccessKey
- 1
- The Kafka Connect and MirrorMaker 2 version, which will always be the same.
- 2
- The number of replica nodes for the workers that run tasks.
- 3
- Kafka cluster alias for Kafka Connect, which must specify the target Kafka cluster. The Kafka cluster is used by Kafka Connect for its internal topics.
- 4
- Specification for the Kafka clusters being synchronized.
- 5
- Cluster alias for the source Kafka cluster.
- 6
- Authentication for the source cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN.
- 7
- Bootstrap server for connection to the source Kafka cluster.
- 8
- TLS encryption with key names under which TLS certificates are stored in X.509 format for the source Kafka cluster. If certificates are stored in the same secret, it can be listed multiple times.
- 9
- Cluster alias for the target Kafka cluster.
- 10
- Authentication for the target Kafka cluster is configured in the same way as for the source Kafka cluster.
- 11
- Bootstrap server for connection to the target Kafka cluster.
- 12
- Kafka Connect configuration. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams.
- 13
- TLS encryption for the target Kafka cluster is configured in the same way as for the source Kafka cluster.
- 14
- MirrorMaker 2 connectors.
- 15
- Cluster alias for the source cluster used by the MirrorMaker 2 connectors.
- 16
- Cluster alias for the target cluster used by the MirrorMaker 2 connectors.
- 17
- Configuration for the MirrorSourceConnector that creates remote topics. The config overrides the default configuration options.
- 18
- The maximum number of tasks that the connector may create. Tasks handle the data replication and run in parallel. If the infrastructure supports the processing overhead, increasing this value can improve throughput. Kafka Connect distributes the tasks between members of the cluster. If there are more tasks than workers, workers are assigned multiple tasks. For sink connectors, aim to have one task for each topic partition consumed. For source connectors, the number of tasks that can run in parallel may also depend on the external system. The connector creates fewer than the maximum number of tasks if it cannot achieve the parallelism.
- 19
- Enables automatic restarts of failed connectors and tasks. Up to seven restart attempts are made, after which restarts must be made manually.
- 20
- Replication factor for mirrored topics created at the target cluster.
- 21
- Replication factor for the MirrorSourceConnector offset-syncs internal topic that maps the offsets of the source and target clusters.
- 22
- When ACL rules synchronization is enabled, ACLs are applied to synchronized topics. The default is true. This feature is not compatible with the User Operator. If you are using the User Operator, set this property to false.
- 23
- Optional setting to change the frequency of checks for new topics. The default is for a check every 10 minutes.
- 24
- Adds a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is useful for active/passive backups and data migration. The property must be specified for all connectors. For bidirectional (active/active) replication, use the DefaultReplicationPolicy class to automatically rename remote topics and specify the replication.policy.separator property for all connectors to add a custom separator.
- 25
- Configuration for the MirrorHeartbeatConnector that performs connectivity checks. The config overrides the default configuration options.
- 26
- Replication factor for the heartbeat topic created at the target cluster.
- 27
- Configuration for the MirrorCheckpointConnector that tracks offsets. The config overrides the default configuration options.
- 28
- Replication factor for the checkpoints topic created at the target cluster.
- 29
- Optional setting to change the frequency of checks for new consumer groups. The default is for a check every 10 minutes.
- 30
- Optional setting to synchronize consumer group offsets, which is useful for recovery in an active/passive configuration. Synchronization is not enabled by default.
- 31
- If the synchronization of consumer group offsets is enabled, you can adjust the frequency of the synchronization.
- 32
- Adjusts the frequency of checks for offset tracking. If you change the frequency of offset synchronization, you might also need to adjust the frequency of these checks.
- 33
- Topic replication from the source cluster defined as a comma-separated list or regular expression pattern. The source connector replicates the specified topics. The checkpoint connector tracks offsets for the specified topics. Here we request three topics by name.
- 34
- Consumer group replication from the source cluster defined as a comma-separated list or regular expression pattern. The checkpoint connector replicates the specified consumer groups. Here we request three consumer groups by name.
- 35
- Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.
- 36
- Specified Kafka Connect loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL, or OFF.
- 37
- Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
- 38
- JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker.
- 39
- ADVANCED OPTION: Container image configuration, which is recommended only in special situations.
- 40
- SPECIALIZED OPTION: Rack awareness configuration for the deployment. This is a specialized option intended for a deployment within the same location, not across regions. Use this option if you want connectors to consume from the closest replica rather than the leader replica. In certain cases, consuming from the closest replica can improve network utilization or reduce costs. The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. To consume from the closest replica, enable the RackAwareReplicaSelector in the Kafka broker configuration.
- 41
- Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname.
- 42
- Environment variables are set for distributed tracing.
- 43
- Distributed tracing is enabled by using OpenTelemetry.
- 44
- External configuration for an OpenShift Secret mounted to Kafka MirrorMaker as an environment variable. You can also use configuration provider plugins to load configuration values from external sources.
8.7.1. Configuring active/active or active/passive modes
You can use MirrorMaker 2 in active/passive or active/active cluster configurations.
- active/active cluster configuration
- An active/active configuration has two active clusters replicating data bidirectionally. Applications can use either cluster. Each cluster can provide the same data. In this way, you can make the same data available in different geographical locations. As consumer groups are active in both clusters, consumer offsets for replicated topics are not synchronized back to the source cluster.
- active/passive cluster configuration
- An active/passive configuration has an active cluster replicating data to a passive cluster. The passive cluster remains on standby. You might use the passive cluster for data recovery in the event of system failure.
The expectation is that producers and consumers connect to active clusters only. A MirrorMaker 2 cluster is required at each target destination.
8.7.1.1. Bidirectional replication (active/active)
The MirrorMaker 2 architecture supports bidirectional replication in an active/active cluster configuration.
Each cluster replicates the data of the other cluster using the concept of source and remote topics. As the same topics are stored in each cluster, remote topics are automatically renamed by MirrorMaker 2 to represent the source cluster. The name of the originating cluster is prepended to the name of the topic.
Figure 8.1. Topic renaming
By flagging the originating cluster, topics are not replicated back to that cluster.
The concept of replication through remote topics is useful when configuring an architecture that requires data aggregation. Consumers can subscribe to source and remote topics within the same cluster, without the need for a separate aggregation cluster.
8.7.1.2. Unidirectional replication (active/passive)
The MirrorMaker 2 architecture supports unidirectional replication in an active/passive cluster configuration.
You can use an active/passive cluster configuration to make backups or migrate data to another cluster. In this situation, you might not want automatic renaming of remote topics.
You can override automatic renaming by adding IdentityReplicationPolicy
to the source connector configuration. With this configuration applied, topics retain their original names.
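A minimal sketch of this override; as noted in the connector descriptions, the policy must be specified for all connectors in use:
sourceConnector:
  config:
    replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy"
checkpointConnector:
  config:
    replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy"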
8.7.2. Configuring MirrorMaker 2 connectors
Use MirrorMaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters.
MirrorMaker 2 consists of the following connectors:
MirrorSourceConnector
- The source connector replicates topics from a source cluster to a target cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run.

MirrorCheckpointConnector
- The checkpoint connector periodically tracks offsets. If enabled, it also synchronizes consumer group offsets between the source and target cluster.
MirrorHeartbeatConnector
- The heartbeat connector periodically checks connectivity between the source and target cluster.
The following table describes connector properties and the connectors you configure to use them.
Table 8.2. MirrorMaker 2 connector configuration properties
Property | sourceConnector | checkpointConnector | heartbeatConnector |
---|---|---|---|
admin.timeout.ms | ✓ | ✓ | ✓ |
replication.policy.class | ✓ | ✓ | ✓ |
replication.policy.separator | ✓ | ✓ | ✓ |
consumer.poll.timeout.ms | ✓ | ✓ | |
offset-syncs.topic.location | ✓ | ✓ | |
topic.filter.class | ✓ | ✓ | |
config.property.filter.class | ✓ | | |
config.properties.exclude | ✓ | | |
offset.lag.max | ✓ | | |
offset-syncs.topic.replication.factor | ✓ | | |
refresh.topics.enabled | ✓ | | |
refresh.topics.interval.seconds | ✓ | | |
replication.factor | ✓ | | |
sync.topic.acls.enabled | ✓ | | |
sync.topic.acls.interval.seconds | ✓ | | |
sync.topic.configs.enabled | ✓ | | |
sync.topic.configs.interval.seconds | ✓ | | |
checkpoints.topic.replication.factor | | ✓ | |
emit.checkpoints.enabled | | ✓ | |
emit.checkpoints.interval.seconds | | ✓ | |
group.filter.class | | ✓ | |
refresh.groups.enabled | | ✓ | |
refresh.groups.interval.seconds | | ✓ | |
sync.group.offsets.enabled | | ✓ | |
sync.group.offsets.interval.seconds | | ✓ | |
emit.heartbeats.enabled | | | ✓ |
emit.heartbeats.interval.seconds | | | ✓ |
heartbeats.topic.replication.factor | | | ✓ |
8.7.2.1. Changing the location of the consumer group offsets topic
MirrorMaker 2 tracks offsets for consumer groups using internal topics.
offset-syncs topic
- The offset-syncs topic maps the source and target offsets for replicated topic partitions from record metadata.
checkpoints topic
- The checkpoints topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group.

As they are used internally by MirrorMaker 2, you do not interact directly with these topics. MirrorCheckpointConnector emits checkpoints for offset tracking. Offsets for the checkpoints topic are tracked at predetermined intervals through configuration. Both topics enable replication to be fully restored from the correct offset position on failover.
The location of the offset-syncs
topic is the source
cluster by default. You can use the offset-syncs.topic.location
connector configuration to change this to the target
cluster. You need read/write access to the cluster that contains the topic. Using the target cluster as the location of the offset-syncs
topic allows you to use MirrorMaker 2 even if you have only read access to the source cluster.
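For example, a minimal sketch of moving the offset-syncs topic to the target cluster might look like this; the cluster names are placeholders, and the value is kept aligned with the checkpoint connector, as described later in this section:

spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        config:
          # Store the offset-syncs topic on the target cluster instead of the source
          offset-syncs.topic.location: target
      checkpointConnector:
        config:
          # Must match the source connector setting
          offset-syncs.topic.location: target
  # ...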
8.7.2.2. Synchronizing consumer group offsets
The __consumer_offsets
topic stores information on committed offsets for each consumer group. Offset synchronization periodically transfers the consumer offsets for the consumer groups of a source cluster into the consumer offsets topic of a target cluster.
Offset synchronization is particularly useful in an active/passive configuration. If the active cluster goes down, consumer applications can switch to the passive (standby) cluster and pick up from the last transferred offset position.
To use topic offset synchronization, enable the synchronization by adding sync.group.offsets.enabled
to the checkpoint connector configuration, and setting the property to true
. Synchronization is disabled by default.
When using the IdentityReplicationPolicy
in the source connector, it also has to be configured in the checkpoint connector configuration. This ensures that the mirrored consumer offsets will be applied for the correct topics.
Consumer offsets are only synchronized for consumer groups that are not active in the target cluster. If the consumer groups are in the target cluster, the synchronization cannot be performed and an UNKNOWN_MEMBER_ID
error is returned.
If enabled, the synchronization of offsets from the source cluster is made periodically. You can change the frequency by adding sync.group.offsets.interval.seconds
and emit.checkpoints.interval.seconds
to the checkpoint connector configuration. The properties specify the frequency in seconds that the consumer group offsets are synchronized, and the frequency of checkpoints emitted for offset tracking. The default for both properties is 60 seconds. You can also change the frequency of checks for new consumer groups using the refresh.groups.interval.seconds
property, which is performed every 10 minutes by default.
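Putting these properties together, a hedged sketch of a checkpoint connector configuration with offset synchronization enabled and the defaults made explicit might look like this (cluster names are placeholders):

spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      checkpointConnector:
        config:
          sync.group.offsets.enabled: "true"        # disabled by default
          sync.group.offsets.interval.seconds: 60   # default is 60 seconds
          emit.checkpoints.interval.seconds: 60     # default is 60 seconds
          refresh.groups.interval.seconds: 600      # default is 10 minutes
  # ...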
Because the synchronization is time-based, any switchover by consumers to a passive cluster will likely result in some duplication of messages.
If you have an application written in Java, you can use the RemoteClusterUtils.java
utility to synchronize offsets through the application. The utility fetches remote offsets for a consumer group from the checkpoints
topic.
8.7.2.3. Deciding when to use the heartbeat connector
The heartbeat connector emits heartbeats to check connectivity between source and target Kafka clusters. An internal heartbeat
topic is replicated from the source cluster, which means that the heartbeat connector must be connected to the source cluster. The heartbeat
topic is located on the target cluster, which allows it to do the following:
- Identify all source clusters it is mirroring data from
- Verify the liveness and latency of the mirroring process
This helps to make sure that the process is not stuck and has not stopped for any reason. While the heartbeat connector can be a valuable tool for monitoring the mirroring processes between Kafka clusters, it's not always necessary to use it. For example, if your deployment has low network latency or a small number of topics, you might prefer to monitor the mirroring process using log messages or other monitoring tools. If you decide not to use the heartbeat connector, simply omit it from your MirrorMaker 2 configuration.
8.7.2.4. Aligning the configuration of MirrorMaker 2 connectors
To ensure that MirrorMaker 2 connectors work properly, make sure to align certain configuration settings across connectors. Specifically, ensure that the following properties have the same value across all applicable connectors:
-
replication.policy.class
-
replication.policy.separator
-
offset-syncs.topic.location
-
topic.filter.class
For example, the value for replication.policy.class
must be the same for the source, checkpoint, and heartbeat connectors. Mismatched or missing settings cause issues with data replication or offset syncing, so it’s essential to keep all relevant connectors configured with the same settings.
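As an illustrative sketch, aligning the replication policy across all three connectors might look like the following; the IdentityReplicationPolicy class is used here only as an example of a value that must match everywhere:

spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        config:
          replication.policy.class: io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy
      checkpointConnector:
        config:
          # Must match the source connector value
          replication.policy.class: io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy
      heartbeatConnector:
        config:
          # Must match the other connectors
          replication.policy.class: io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy
  # ...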
8.7.3. Configuring MirrorMaker 2 connector producers and consumers
MirrorMaker 2 connectors use internal producers and consumers. If needed, you can configure these producers and consumers to override the default settings.
For example, you can increase the batch.size
for the source producer that sends topics to the target Kafka cluster to better accommodate large volumes of messages.
Producer and consumer configuration options depend on the MirrorMaker 2 implementation, and may be subject to change.
The following tables describe the producers and consumers for each of the connectors and where you can add configuration.
Table 8.3. Source connector producers and consumers
Type | Description | Configuration |
---|---|---|
Producer | Sends topic messages to the target Kafka cluster. Consider tuning the configuration of this producer when it is handling large volumes of data. | mirrors.sourceConnector.config: producer.override.* |
Producer | Writes to the offset-syncs topic, which maps the source and target offsets for replicated topic partitions. | mirrors.sourceConnector.config: producer.* |
Consumer | Retrieves topic messages from the source Kafka cluster. | mirrors.sourceConnector.config: consumer.* |
Table 8.4. Checkpoint connector producers and consumers
Type | Description | Configuration |
---|---|---|
Producer | Emits consumer offset checkpoints. | mirrors.checkpointConnector.config: producer.override.* |
Consumer | Loads the offset-syncs topic. | mirrors.checkpointConnector.config: consumer.* |
You can set offset-syncs.topic.location
to target
to use the target Kafka cluster as the location of the offset-syncs
topic.
Table 8.5. Heartbeat connector producer
Type | Description | Configuration |
---|---|---|
Producer | Emits heartbeats. | mirrors.heartbeatConnector.config: producer.override.* |
The following example shows how you configure the producers and consumers.
Example configuration for connector producers and consumers
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  version: 3.5.0
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        tasksMax: 5
        config:
          producer.override.batch.size: 327680
          producer.override.linger.ms: 100
          producer.request.timeout.ms: 30000
          consumer.fetch.max.bytes: 52428800
          # ...
      checkpointConnector:
        config:
          producer.override.request.timeout.ms: 30000
          consumer.max.poll.interval.ms: 300000
          # ...
      heartbeatConnector:
        config:
          producer.override.request.timeout.ms: 30000
          # ...
8.7.4. Specifying a maximum number of data replication tasks
Connectors create the tasks that are responsible for moving data in and out of Kafka. Each connector comprises one or more tasks that are distributed across a group of worker pods that run the tasks. Increasing the number of tasks can help with performance issues when replicating a large number of partitions or synchronizing the offsets of a large number of consumer groups.
Tasks run in parallel. Workers are assigned one or more tasks. A single task is handled by one worker pod, so you don’t need more worker pods than tasks. If there are more tasks than workers, workers handle multiple tasks.
You can specify the maximum number of connector tasks in your MirrorMaker configuration using the tasksMax
property. Without specifying a maximum number of tasks, the default setting is a single task.
The heartbeat connector always uses a single task.
The number of tasks that are started for the source and checkpoint connectors is the lower value between the maximum number of possible tasks and the value for tasksMax
. For the source connector, the maximum number of tasks possible is one for each partition being replicated from the source cluster. For the checkpoint connector, the maximum number of tasks possible is one for each consumer group being replicated from the source cluster. When setting a maximum number of tasks, consider the number of partitions and the hardware resources that support the process.
If the infrastructure supports the processing overhead, increasing the number of tasks can improve throughput and latency. For example, adding more tasks reduces the time taken to poll the source cluster when there is a high number of partitions or consumer groups.
Increasing the number of tasks for the source connector is useful when you have a large number of partitions.
Increasing the number of tasks for the source connector
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        tasksMax: 10
  # ...
Increasing the number of tasks for the checkpoint connector is useful when you have a large number of consumer groups.
Increasing the number of tasks for the checkpoint connector
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      checkpointConnector:
        tasksMax: 10
  # ...
By default, MirrorMaker 2 checks for new consumer groups every 10 minutes. You can adjust the refresh.groups.interval.seconds
configuration to change the frequency. Take care when adjusting lower. More frequent checks can have a negative impact on performance.
8.7.4.1. Checking connector task operations
If you are using Prometheus and Grafana to monitor your deployment, you can check MirrorMaker 2 performance. The example MirrorMaker 2 Grafana dashboard provided with AMQ Streams shows the following metrics related to tasks and latency.
- The number of tasks
- Replication latency
- Offset synchronization latency
8.7.5. Synchronizing ACL rules for remote topics
When using MirrorMaker 2 with AMQ Streams, it is possible to synchronize ACL rules for remote topics. However, this feature is only available if you are not using the User Operator.
If you are using type: simple
authorization without the User Operator, the ACL rules that manage access to brokers also apply to remote topics. This means that users who have read access to a source topic can also read its remote equivalent.
OAuth 2.0 authorization does not support access to remote topics in this way.
8.7.6. Securing a Kafka MirrorMaker 2 deployment
This procedure describes in outline the configuration required to secure a MirrorMaker 2 deployment.
You need separate configuration for the source Kafka cluster and the target Kafka cluster. You also need separate user configuration to provide the credentials required for MirrorMaker to connect to the source and target Kafka clusters.
For the Kafka clusters, you specify internal listeners for secure connections within an OpenShift cluster and external listeners for connections outside the OpenShift cluster.
You can configure authentication and authorization mechanisms. The security options implemented for the source and target Kafka clusters must be compatible with the security options implemented for MirrorMaker 2.
After you have created the cluster and user authentication credentials, you specify them in your MirrorMaker configuration for secure connections.
In this procedure, the certificates generated by the Cluster Operator are used, but you can replace them by installing your own certificates. You can also configure your listener to use a Kafka listener certificate managed by an external CA (certificate authority).
Before you start
Before starting this procedure, take a look at the example configuration files provided by AMQ Streams. They include examples for securing a deployment of MirrorMaker 2 using mTLS or SCRAM-SHA-512 authentication. The examples specify internal listeners for connecting within an OpenShift cluster.
The examples provide the configuration for full authorization, including all the ACLs needed by MirrorMaker 2 to allow operations on the source and target Kafka clusters.
Prerequisites
- AMQ Streams is running
- Separate namespaces for source and target clusters
The procedure assumes that the source and target Kafka clusters are installed to separate namespaces. If you want to use the Topic Operator, you'll need to do this. The Topic Operator only watches a single cluster in a specified namespace.
Because the clusters are in separate namespaces, you need to copy the cluster secrets so they can be accessed outside their own namespace. You reference the secrets in the MirrorMaker configuration.
Procedure
Configure two Kafka resources, one to secure the source Kafka cluster and one to secure the target Kafka cluster.

You can add listener configuration for authentication and enable authorization.

In this example, an internal listener is configured for a Kafka cluster with TLS encryption and mTLS authentication. Kafka simple authorization is enabled.

Example source Kafka cluster configuration with TLS encryption and mTLS authentication
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-source-cluster
spec:
  kafka:
    version: 3.5.0
    replicas: 1
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.5"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
Example target Kafka cluster configuration with TLS encryption and mTLS authentication
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-target-cluster
spec:
  kafka:
    version: 3.5.0
    replicas: 1
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.5"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
Create or update the Kafka resources in separate namespaces.

oc apply -f <kafka_configuration_file> -n <namespace>
The Cluster Operator creates the listeners and sets up the cluster and client certificate authority (CA) certificates to enable authentication within the Kafka cluster.
The certificates are created in the secret <cluster_name>-cluster-ca-cert.

Configure two KafkaUser resources, one for a user of the source Kafka cluster and one for a user of the target Kafka cluster.

- Configure the same authentication and authorization types as the corresponding source and target Kafka cluster. For example, if you used tls authentication and the simple authorization type in the Kafka configuration for the source Kafka cluster, use the same in the KafkaUser configuration.
- Configure the ACLs needed by MirrorMaker 2 to allow operations on the source and target Kafka clusters. The ACLs are used by the internal MirrorMaker connectors, and by the underlying Kafka Connect framework.

Example source user configuration for mTLS authentication
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-source-user
  labels:
    strimzi.io/cluster: my-source-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # MirrorSourceConnector
      - resource: # Not needed if offset-syncs.topic.location=target
          type: topic
          name: mm2-offset-syncs.my-target-cluster.internal
        operations:
          - Create
          - DescribeConfigs
          - Read
          - Write
      - resource: # Needed for every topic which is mirrored
          type: topic
          name: "*"
        operations:
          - DescribeConfigs
          - Read
      # MirrorCheckpointConnector
      - resource:
          type: cluster
        operations:
          - Describe
      - resource: # Needed for every group for which offsets are synced
          type: group
          name: "*"
        operations:
          - Describe
      - resource: # Not needed if offset-syncs.topic.location=target
          type: topic
          name: mm2-offset-syncs.my-target-cluster.internal
        operations:
          - Read
Example target user configuration for mTLS authentication
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-target-user
  labels:
    strimzi.io/cluster: my-target-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # Underlying Kafka Connect internal topics to store configuration, offsets, or status
      - resource:
          type: group
          name: mirrormaker2-cluster
        operations:
          - Read
      - resource:
          type: topic
          name: mirrormaker2-cluster-configs
        operations:
          - Create
          - Describe
          - DescribeConfigs
          - Read
          - Write
      - resource:
          type: topic
          name: mirrormaker2-cluster-status
        operations:
          - Create
          - Describe
          - DescribeConfigs
          - Read
          - Write
      - resource:
          type: topic
          name: mirrormaker2-cluster-offsets
        operations:
          - Create
          - Describe
          - DescribeConfigs
          - Read
          - Write
      # MirrorSourceConnector
      - resource: # Needed for every topic which is mirrored
          type: topic
          name: "*"
        operations:
          - Create
          - Alter
          - AlterConfigs
          - Write
      # MirrorCheckpointConnector
      - resource:
          type: cluster
        operations:
          - Describe
      - resource:
          type: topic
          name: my-source-cluster.checkpoints.internal
        operations:
          - Create
          - Describe
          - Read
          - Write
      - resource: # Needed for every group for which the offset is synced
          type: group
          name: "*"
        operations:
          - Read
          - Describe
      # MirrorHeartbeatConnector
      - resource:
          type: topic
          name: heartbeats
        operations:
          - Create
          - Describe
          - Write
Note
You can use a certificate issued outside the User Operator by setting type to tls-external. For more information, see the KafkaUserSpec schema reference.
Create or update a KafkaUser resource in each of the namespaces you created for the source and target Kafka clusters.

oc apply -f <kafka_user_configuration_file> -n <namespace>
The User Operator creates the users representing the client (MirrorMaker), and the security credentials used for client authentication, based on the chosen authentication type.
The User Operator creates a new secret with the same name as the KafkaUser resource. The secret contains a private and public key for mTLS authentication. The public key is contained in a user certificate, which is signed by the clients CA.

Configure a KafkaMirrorMaker2 resource with the authentication details to connect to the source and target Kafka clusters.

Example MirrorMaker 2 configuration with TLS encryption and mTLS authentication
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  version: 3.5.0
  replicas: 1
  connectCluster: "my-target-cluster"
  clusters:
    - alias: "my-source-cluster"
      bootstrapServers: my-source-cluster-kafka-bootstrap:9093
      tls: 1
        trustedCertificates:
          - secretName: my-source-cluster-cluster-ca-cert
            certificate: ca.crt
      authentication: 2
        type: tls
        certificateAndKey:
          secretName: my-source-user
          certificate: user.crt
          key: user.key
    - alias: "my-target-cluster"
      bootstrapServers: my-target-cluster-kafka-bootstrap:9093
      tls: 3
        trustedCertificates:
          - secretName: my-target-cluster-cluster-ca-cert
            certificate: ca.crt
      authentication: 4
        type: tls
        certificateAndKey:
          secretName: my-target-user
          certificate: user.crt
          key: user.key
  config:
    # -1 means it will use the default replication factor configured in the broker
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
  mirrors:
    - sourceCluster: "my-source-cluster"
      targetCluster: "my-target-cluster"
      sourceConnector:
        config:
          replication.factor: 1
          offset-syncs.topic.replication.factor: 1
          sync.topic.acls.enabled: "false"
      heartbeatConnector:
        config:
          heartbeats.topic.replication.factor: 1
      checkpointConnector:
        config:
          checkpoints.topic.replication.factor: 1
          sync.group.offsets.enabled: "true"
      topicsPattern: "topic1|topic2|topic3"
      groupsPattern: "group1|group2|group3"
- 1
- The TLS certificates for the source Kafka cluster. If they are in a separate namespace, copy the cluster secrets from the namespace of the Kafka cluster.
- 2
- The user authentication for accessing the source Kafka cluster using the TLS mechanism.
- 3
- The TLS certificates for the target Kafka cluster.
- 4
- The user authentication for accessing the target Kafka cluster.
Create or update the KafkaMirrorMaker2 resource in the same namespace as the target Kafka cluster.

oc apply -f <mirrormaker2_configuration_file> -n <namespace_of_target_cluster>
8.8. Configuring Kafka MirrorMaker (deprecated)
Update the spec
properties of the KafkaMirrorMaker
custom resource to configure your Kafka MirrorMaker deployment.
You can configure access control for producers and consumers using TLS or SASL authentication. This procedure shows a configuration that uses TLS encryption and mTLS authentication on the consumer and producer side.
For a deeper understanding of the Kafka MirrorMaker cluster configuration options, refer to the AMQ Streams Custom Resource API Reference.
Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker
custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated in AMQ Streams as well. The KafkaMirrorMaker
resource will be removed from AMQ Streams when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2
custom resource with the IdentityReplicationPolicy
.
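A minimal sketch of such a replacement, assuming placeholder cluster names and bootstrap addresses, might look like this:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  connectCluster: "my-target-cluster"
  clusters:
    - alias: "my-source-cluster"
      bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    - alias: "my-target-cluster"
      bootstrapServers: my-target-cluster-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "my-source-cluster"
      targetCluster: "my-target-cluster"
      sourceConnector:
        config:
          # Emulates MirrorMaker 1 behavior: mirrored topics keep their original names
          replication.policy.class: io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy
      topicsPattern: ".*"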
Example KafkaMirrorMaker
custom resource configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  replicas: 3 1
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2
    groupId: "my-group" 3
    numStreams: 2 4
    offsetCommitInterval: 120000 5
    tls: 6
      trustedCertificates:
        - secretName: my-source-cluster-ca-cert
          certificate: ca.crt
    authentication: 7
      type: tls
      certificateAndKey:
        secretName: my-source-secret
        certificate: public.crt
        key: private.key
    config: 8
      max.poll.records: 100
      receive.buffer.bytes: 32768
  producer:
    bootstrapServers: my-target-cluster-kafka-bootstrap:9092
    abortOnSendFailure: false 9
    tls:
      trustedCertificates:
        - secretName: my-target-cluster-ca-cert
          certificate: ca.crt
    authentication:
      type: tls
      certificateAndKey:
        secretName: my-target-secret
        certificate: public.crt
        key: private.key
    config:
      compression.type: gzip
      batch.size: 8192
  include: "my-topic|other-topic" 10
  resources: 11
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 2Gi
  logging: 12
    type: inline
    loggers:
      mirrormaker.root.logger: INFO
  readinessProbe: 13
    initialDelaySeconds: 15
    timeoutSeconds: 5
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 5
  metricsConfig: 14
    type: jmxPrometheusExporter
    valueFrom:
      configMapKeyRef:
        name: my-config-map
        key: my-key
  jvmOptions: 15
    "-Xmx": "1g"
    "-Xms": "1g"
  image: my-org/my-image:latest 16
  template: 17
    pod:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: application
                    operator: In
                    values:
                      - postgresql
                      - mongodb
              topologyKey: "kubernetes.io/hostname"
    mirrorMakerContainer: 18
      env:
        - name: OTEL_SERVICE_NAME
          value: my-otel-service
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://otlp-host:4317"
  tracing: 19
    type: opentelemetry
- 1
- The number of replica nodes.
- 2
- Bootstrap servers for consumer and producer.
- 3
- Group ID for the consumer.
- 4
- The number of consumer streams.
- 5
- The offset auto-commit interval in milliseconds.
- 6
- TLS encryption with key names under which TLS certificates are stored in X.509 format for consumer or producer. If certificates are stored in the same secret, it can be listed multiple times.
- 7
- Authentication for consumer or producer, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN.
- 8
- Kafka configuration options for consumer and producer.
- 9
- If the abortOnSendFailure property is set to true, Kafka MirrorMaker will exit and the container will restart following a send failure for a message.
- 10
- A list of included topics mirrored from source to target Kafka cluster.
- 11
- Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.
- 12
- Specified loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. MirrorMaker has a single logger called mirrormaker.root.logger. You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL, or OFF.
- 13
- Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
- 14
- Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key.
- 15
- JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker.
- 16
- ADVANCED OPTION: Container image configuration, which is recommended only in special situations.
- 17
- Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname.
- 18
- Environment variables are set for distributed tracing.
- 19
- Distributed tracing is enabled by using OpenTelemetry.

Warning
With the abortOnSendFailure property set to false, the producer attempts to send the next message in a topic. The original message might be lost, as there is no attempt to resend a failed message.
8.9. Configuring the Kafka Bridge
Update the spec
properties of the KafkaBridge
custom resource to configure your Kafka Bridge deployment.
In order to prevent issues arising when client consumer requests are processed by different Kafka Bridge instances, address-based routing must be employed to ensure that requests are routed to the right Kafka Bridge instance. Additionally, each independent Kafka Bridge instance must have a replica. A Kafka Bridge instance has its own state, which is not shared with other instances.
For a deeper understanding of the Kafka Bridge cluster configuration options, refer to the AMQ Streams Custom Resource API Reference.
Example KafkaBridge
custom resource configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 3 1
  bootstrapServers: <cluster_name>-cluster-kafka-bootstrap:9092 2
  tls: 3
    trustedCertificates:
      - secretName: my-cluster-cluster-cert
        certificate: ca.crt
      - secretName: my-cluster-cluster-cert
        certificate: ca2.crt
  authentication: 4
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: public.crt
      key: private.key
  http: 5
    port: 8080
    cors: 6
      allowedOrigins: "https://strimzi.io"
      allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH"
  consumer: 7
    config:
      auto.offset.reset: earliest
  producer: 8
    config:
      delivery.timeout.ms: 300000
  resources: 9
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 2Gi
  logging: 10
    type: inline
    loggers:
      logger.bridge.level: INFO
      # enabling DEBUG just for send operation
      logger.send.name: "http.openapi.operation.send"
      logger.send.level: DEBUG
  jvmOptions: 11
    "-Xmx": "1g"
    "-Xms": "1g"
  readinessProbe: 12
    initialDelaySeconds: 15
    timeoutSeconds: 5
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 5
  image: my-org/my-image:latest 13
  template: 14
    pod:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: application
                    operator: In
                    values:
                      - postgresql
                      - mongodb
              topologyKey: "kubernetes.io/hostname"
    bridgeContainer: 15
      env:
        - name: OTEL_SERVICE_NAME
          value: my-otel-service
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://otlp-host:4317"
  tracing:
    type: opentelemetry 16
- 1
- The number of replica nodes.
- 2
- Bootstrap server for connection to the target Kafka cluster. Use the name of the Kafka cluster as the <cluster_name>.
- 3
- TLS encryption with key names under which TLS certificates are stored in X.509 format for the source Kafka cluster. If certificates are stored in the same secret, it can be listed multiple times.
- 4
- Authentication for the Kafka Bridge cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. By default, the Kafka Bridge connects to Kafka brokers without authentication.
- 5
- HTTP access to Kafka brokers.
- 6
- CORS access specifying selected resources and access methods. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster.
- 7
- Consumer configuration options.
- 8
- Producer configuration options.
- 9
- Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.
- 10
- Specified Kafka Bridge loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Bridge loggers, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL, or OFF.
- 11
- JVM configuration options to optimize performance for the Virtual Machine (VM) running the Kafka Bridge.
- 12
- Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
- 13
- Optional: Container image configuration, which is recommended only in special situations.
- 14
- Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname.
- 15
- Environment variables are set for distributed tracing.
- 16
- Distributed tracing is enabled by using OpenTelemetry.
8.10. Configuring Kafka and ZooKeeper storage
As stateful applications, Kafka and ZooKeeper store data on disk. AMQ Streams supports three storage types for this data:
- Ephemeral (Recommended for development only)
- Persistent
- JBOD (Kafka only, not ZooKeeper)
When configuring a Kafka
resource, you can specify the type of storage used by the Kafka broker and its corresponding ZooKeeper node. You configure the storage type using the storage
property in the following resources:
-
Kafka.spec.kafka
-
Kafka.spec.zookeeper
The storage type is configured in the type
field.
Refer to the schema reference for more information on storage configuration properties.
The storage type cannot be changed after a Kafka cluster is deployed.
8.10.1. Data storage considerations
For AMQ Streams to work well, an efficient data storage infrastructure is essential. We strongly recommend using block storage. AMQ Streams is only tested for use with block storage. File storage, such as NFS, is not tested and there is no guarantee it will work.
Choose one of the following options for your block storage:
- A cloud-based block storage solution, such as Amazon Elastic Block Store (EBS)
- Persistent storage using local persistent volumes
- Storage Area Network (SAN) volumes accessed by a protocol such as Fibre Channel or iSCSI
AMQ Streams does not require OpenShift raw block volumes.
8.10.1.1. File systems
Kafka uses a file system for storing messages. AMQ Streams is compatible with the XFS and ext4 file systems, which are commonly used with Kafka. Consider the underlying architecture and requirements of your deployment when choosing and setting up your file system.
For more information, refer to Filesystem Selection in the Kafka documentation.
8.10.1.2. Disk usage
Use separate disks for Apache Kafka and ZooKeeper.
Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously. SSDs are particularly effective with ZooKeeper, which requires fast, low latency data access.
You do not need to provision replicated storage because Kafka and ZooKeeper both have built-in data replication.
8.10.2. Ephemeral storage
Ephemeral data storage is transient. All pods on a node share a local ephemeral storage space. Data is retained for as long as the pod that uses it is running. The data is lost when a pod is deleted, although a pod can recover data in a highly available environment.
Because of its transient nature, ephemeral storage is only recommended for development and testing.
Ephemeral storage uses emptyDir
volumes to store data. An emptyDir
volume is created when a pod is assigned to a node. You can set the total amount of storage for the emptyDir
using the sizeLimit
property.
Ephemeral storage is not suitable for single-node ZooKeeper clusters or Kafka topics with a replication factor of 1.
To use ephemeral storage, you set the storage type configuration in the Kafka
or ZooKeeper
resource to ephemeral
.
Example ephemeral storage configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: ephemeral
    # ...
  zookeeper:
    # ...
    storage:
      type: ephemeral
    # ...
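If you want to cap the local storage a cluster can consume, a sketch using the sizeLimit property (the value shown is illustrative) might look like this:

spec:
  kafka:
    # ...
    storage:
      type: ephemeral
      # Caps the emptyDir volume backing the broker log directories
      sizeLimit: 2Gi
    # ...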
8.10.2.1. Mount path of Kafka log directories
The ephemeral volume is used by Kafka brokers as log directories mounted into the following path:
/var/lib/kafka/data/kafka-logIDX
Where IDX
is the Kafka broker pod index. For example /var/lib/kafka/data/kafka-log0
.
8.10.3. Persistent storage
Persistent data storage retains data in the event of system disruption. For pods that use persistent data storage, data is persisted across pod failures and restarts.
A dynamic provisioning framework enables clusters to be created with persistent storage. Pod configuration uses Persistent Volume Claims (PVCs) to make storage requests on persistent volumes (PVs). PVs are storage resources that represent a storage volume. PVs are independent of the pods that use them. The PVC requests the amount of storage required when a pod is being created. The underlying storage infrastructure of the PV does not need to be understood. If a PV matches the storage criteria, the PVC is bound to the PV.
Because of its permanent nature, persistent storage is recommended for production.
PVCs can request different types of persistent storage by specifying a StorageClass. Storage classes define storage profiles and dynamically provision PVs. If a storage class is not specified, the default storage class is used. Persistent storage options might include SAN storage types or local persistent volumes.
To use persistent storage, you set the storage type configuration in the Kafka
or ZooKeeper
resource to persistent-claim
.
In the production environment, the following configuration is recommended:
-
For Kafka, configure
type: jbod
with one or moretype: persistent-claim
volumes -
For ZooKeeper, configure
type: persistent-claim
Persistent storage also has the following configuration options:
id (optional)
- A storage identification number. This option is mandatory for storage volumes defined in a JBOD storage declaration. Default is 0.
size (required)
- The size of the persistent volume claim, for example, "1000Gi".
class (optional)
- The OpenShift StorageClass to use for dynamic volume provisioning. Storage class configuration includes parameters that describe the profile of a volume in detail.
selector (optional)
- Configuration to specify a specific PV. Provides key:value pairs representing the labels of the volume selected.
deleteClaim (optional)
- Boolean value to specify whether the PVC is deleted when the cluster is uninstalled. Default is false.
Increasing the size of persistent volumes in an existing AMQ Streams cluster is only supported in OpenShift versions that support persistent volume resizing. The persistent volume to be resized must use a storage class that supports volume expansion. For other versions of OpenShift and storage classes that do not support volume expansion, you must decide the necessary storage size before deploying the cluster. Decreasing the size of existing persistent volumes is not possible.
Example persistent storage configuration for Kafka and ZooKeeper
# ...
spec:
  kafka:
    # ...
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 1
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 2
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
    # ...
  zookeeper:
    storage:
      type: persistent-claim
      size: 1000Gi
  # ...
If you do not specify a storage class, the default is used. The following example specifies a storage class.
Example persistent storage configuration with specific storage class
# ...
storage:
  type: persistent-claim
  size: 1Gi
  class: my-storage-class
# ...
Use a selector
to specify a labeled persistent volume that provides certain features, such as an SSD.
Example persistent storage configuration with selector
# ...
storage:
  type: persistent-claim
  size: 1Gi
  selector:
    hdd-type: ssd
  deleteClaim: true
# ...
8.10.3.1. Storage class overrides
Instead of using the default storage class, you can specify a different storage class for one or more Kafka brokers or ZooKeeper nodes. This is useful, for example, when storage classes are restricted to different availability zones or data centers. You can use the overrides
field for this purpose.
In this example, the default storage class is named my-storage-class
:
Example AMQ Streams cluster using storage class overrides
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  labels:
    app: my-cluster
  name: my-cluster
  namespace: myproject
spec:
  # ...
  kafka:
    replicas: 3
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
          class: my-storage-class
          overrides:
            - broker: 0
              class: my-storage-class-zone-1a
            - broker: 1
              class: my-storage-class-zone-1b
            - broker: 2
              class: my-storage-class-zone-1c
    # ...
  # ...
  zookeeper:
    replicas: 3
    storage:
      deleteClaim: true
      size: 100Gi
      type: persistent-claim
      class: my-storage-class
      overrides:
        - broker: 0
          class: my-storage-class-zone-1a
        - broker: 1
          class: my-storage-class-zone-1b
        - broker: 2
          class: my-storage-class-zone-1c
  # ...
As a result of the configured overrides
property, the volumes use the following storage classes:
-
The persistent volumes of ZooKeeper node 0 use
my-storage-class-zone-1a
. -
The persistent volumes of ZooKeeper node 1 use
my-storage-class-zone-1b
. -
The persistent volumes of ZooKeeper node 2 use
my-storage-class-zone-1c
. -
The persistent volumes of Kafka broker 0 use
my-storage-class-zone-1a
. -
The persistent volumes of Kafka broker 1 use
my-storage-class-zone-1b
. -
The persistent volumes of Kafka broker 2 use
my-storage-class-zone-1c
.
The overrides
property is currently used only to override storage class configuration. Overrides for other storage configuration properties are not currently supported.
8.10.3.2. PVC resources for persistent storage
When persistent storage is used, it creates PVCs with the following names:
data-cluster-name-kafka-idx
- PVC for the volume used for storing data for the Kafka broker pod idx.
data-cluster-name-zookeeper-idx
- PVC for the volume used for storing data for the ZooKeeper node pod idx.
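For example, assuming the Cluster Operator applies its strimzi.io/cluster label to the PVCs it creates, you can list them for a cluster named my-cluster and check the names against this scheme:

oc get pvc -l strimzi.io/cluster=my-cluster

For a three-broker cluster, you would expect names such as data-my-cluster-kafka-0 through data-my-cluster-kafka-2, and data-my-cluster-zookeeper-0 through data-my-cluster-zookeeper-2.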
8.10.3.3. Mount path of Kafka log directories
The persistent volume is used by the Kafka brokers as log directories mounted into the following path:
/var/lib/kafka/data/kafka-logIDX
Where IDX
is the Kafka broker pod index. For example /var/lib/kafka/data/kafka-log0
.
8.10.4. Resizing persistent volumes
Persistent volumes used by a cluster can be resized without any risk of data loss, as long as the storage infrastructure supports it. Following a configuration update to change the size of the storage, AMQ Streams instructs the storage infrastructure to make the change. Storage expansion is supported in AMQ Streams clusters that use persistent-claim volumes.
Storage reduction is only possible when using multiple disks per broker. You can remove a disk after moving all partitions on the disk to other volumes within the same broker (intra-broker) or to other brokers within the same cluster (intra-cluster).
You cannot decrease the size of persistent volumes because it is not currently supported in OpenShift.
Prerequisites
- An OpenShift cluster with support for volume resizing.
- The Cluster Operator is running.
- A Kafka cluster using persistent volumes created using a storage class that supports volume expansion.
Procedure
Edit the Kafka resource for your cluster.

Change the size property to increase the size of the persistent volume allocated to a Kafka cluster, a ZooKeeper cluster, or both.

- For Kafka clusters, update the size property under spec.kafka.storage.
- For ZooKeeper clusters, update the size property under spec.zookeeper.storage.

Kafka configuration to increase the volume size to 2000Gi

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: persistent-claim
      size: 2000Gi
      class: my-storage-class
    # ...
  zookeeper:
    # ...
Create or update the resource:
oc apply -f <kafka_configuration_file>
OpenShift increases the capacity of the selected persistent volumes in response to a request from the Cluster Operator. When the resizing is complete, the Cluster Operator restarts all pods that use the resized persistent volumes. This happens automatically.
Verify that the storage capacity has increased for the relevant pods on the cluster:
oc get pv
Kafka broker pods with increased storage
NAME             CAPACITY   CLAIM
pvc-0ca459ce-... 2000Gi     my-project/data-my-cluster-kafka-2
pvc-6e1810be-... 2000Gi     my-project/data-my-cluster-kafka-0
pvc-82dc78c9-... 2000Gi     my-project/data-my-cluster-kafka-1
The output shows the names of each PVC associated with a broker pod.
Additional resources
- For more information about resizing persistent volumes in OpenShift, see Resizing Persistent Volumes using Kubernetes.
8.10.5. JBOD storage
You can configure AMQ Streams to use JBOD, a data storage configuration of multiple disks or volumes. JBOD is one approach to providing increased data storage for Kafka brokers. It can also improve performance.
JBOD storage is supported for Kafka only, not ZooKeeper.
A JBOD configuration is described by one or more volumes, each of which can be either ephemeral or persistent. The rules and constraints for JBOD volume declarations are the same as those for ephemeral and persistent storage. For example, you cannot decrease the size of a persistent storage volume after it has been provisioned, and you cannot change the value of sizeLimit when the type is ephemeral.
To use JBOD storage, you set the storage type configuration in the Kafka
resource to jbod
. The volumes
property allows you to describe the disks that make up your JBOD storage array or configuration.
Example JBOD storage configuration
# ...
storage:
  type: jbod
  volumes:
    - id: 0
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
    - id: 1
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
# ...
The IDs cannot be changed once the JBOD volumes are created. You can add or remove volumes from the JBOD configuration.
8.10.5.1. PVC resource for JBOD storage
When persistent storage is used to declare JBOD volumes, it creates a PVC with the following name:
data-id-cluster-name-kafka-idx
- PVC for the volume used for storing data for the Kafka broker pod idx. The id is the ID of the volume used for storing data for Kafka broker pod.
8.10.5.2. Mount path of Kafka log directories
The JBOD volumes are used by Kafka brokers as log directories mounted into the following path:
/var/lib/kafka/data-id/kafka-logidx
Where id
is the ID of the volume used for storing data for Kafka broker pod idx
. For example /var/lib/kafka/data-0/kafka-log0
.
8.10.6. Adding volumes to JBOD storage
This procedure describes how to add volumes to a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type.
When adding a new volume under an id
which was already used in the past and removed, you have to make sure that the previously used PersistentVolumeClaims
have been deleted.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- A Kafka cluster with JBOD storage
Procedure
Edit the spec.kafka.storage.volumes property in the Kafka resource. Add the new volumes to the volumes array. For example, add the new volume with id 2:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 1
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 2
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
    # ...
  zookeeper:
    # ...
Create or update the resource:
oc apply -f <kafka_configuration_file>
Create new topics or reassign existing partitions to the new disks.
Tip
Cruise Control is an effective tool for reassigning partitions. To perform an intra-broker disk balance, you set rebalanceDisk to true under the KafkaRebalance.spec.
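A minimal sketch of such a KafkaRebalance resource, assuming Cruise Control is deployed and my-cluster is a placeholder cluster name:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-disk-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # Moves data between the disks of the same broker (intra-broker balancing)
  rebalanceDisk: true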
8.10.7. Removing volumes from JBOD storage
This procedure describes how to remove volumes from a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. The JBOD storage always has to contain at least one volume.
To avoid data loss, you have to move all partitions before removing the volumes.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- A Kafka cluster with JBOD storage with two or more volumes
Procedure
Reassign all partitions from the disks which you are going to remove. Any data in partitions still assigned to the disks which are going to be removed might be lost.

Tip
You can use the kafka-reassign-partitions.sh tool to reassign the partitions.
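As a hedged outline, the tool might be run from a broker container as follows; the pod name, listener port, and the path to a prepared reassignment plan are assumptions that depend on your deployment:

oc exec my-cluster-kafka-0 -c kafka -- \
  bin/kafka-reassign-partitions.sh \
  --bootstrap-server localhost:9092 \
  --reassignment-json-file /tmp/reassignment.json \
  --execute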
Edit the spec.kafka.storage.volumes property in the Kafka resource. Remove one or more volumes from the volumes array. For example, remove the volumes with ids 1 and 2:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
    # ...
  zookeeper:
    # ...
Create or update the resource:
oc apply -f <kafka_configuration_file>
8.11. Configuring CPU and memory resource limits and requests
By default, the AMQ Streams Cluster Operator does not specify CPU and memory resource requests and limits for its deployed operands. Ensuring an adequate allocation of resources is crucial for maintaining stability and achieving optimal performance in Kafka. The ideal resource allocation depends on your specific requirements and use cases.
It is recommended to configure CPU and memory resources for each container by setting appropriate requests and limits.
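For example, a hedged sketch of requests and limits for the Kafka and ZooKeeper containers (the values are illustrative and depend entirely on your workload):

spec:
  kafka:
    # ...
    resources:
      requests:
        cpu: "4"
        memory: 8Gi
      limits:
        cpu: "8"
        memory: 8Gi
  zookeeper:
    # ...
    resources:
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "2"
        memory: 2Gi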
8.12. Configuring pod scheduling
To avoid performance degradation caused by resource conflicts between applications scheduled on the same OpenShift node, you can schedule Kafka pods separately from critical workloads. This can be achieved by either selecting specific nodes or dedicating a set of nodes exclusively for Kafka.
8.12.1. Specifying affinity, tolerations, and topology spread constraints
Use affinity, tolerations, and topology spread constraints to schedule the pods of Kafka resources onto nodes. Affinity, tolerations, and topology spread constraints are configured using the affinity, tolerations, and topologySpreadConstraints properties in the following resources:
-
Kafka.spec.kafka.template.pod
-
Kafka.spec.zookeeper.template.pod
-
Kafka.spec.entityOperator.template.pod
-
KafkaConnect.spec.template.pod
-
KafkaBridge.spec.template.pod
-
KafkaMirrorMaker.spec.template.pod
-
KafkaMirrorMaker2.spec.template.pod
The format of the affinity, tolerations, and topologySpreadConstraints properties follows the OpenShift specification. The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
8.12.1.1. Use pod anti-affinity to avoid critical applications sharing nodes
Use pod anti-affinity to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.
8.12.1.2. Use node affinity to schedule workloads onto specific nodes
The OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of AMQ Streams components to use the right nodes.
OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either the built-in node label like beta.kubernetes.io/instance-type
or custom labels to select the right node.
8.12.1.3. Use node affinity and tolerations for dedicated nodes
Use taints to create dedicated nodes, then schedule Kafka pods on the dedicated nodes by configuring node affinity and tolerations.
Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks.
Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability.
8.12.2. Configuring pod anti-affinity to schedule each Kafka broker on a different worker node
Many Kafka brokers or ZooKeeper nodes can run on the same OpenShift worker node. If the worker node fails, they will all become unavailable at the same time. To improve reliability, you can use podAntiAffinity
configuration to schedule each Kafka broker or ZooKeeper node on a different OpenShift worker node.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the affinity property in the resource specifying the cluster deployment. To make sure that no worker nodes are shared by Kafka brokers or ZooKeeper nodes, use the strimzi.io/name label. Set the topologyKey to kubernetes.io/hostname to specify that the selected pods are not scheduled on nodes with the same hostname. This will still allow the same worker node to be shared by a single Kafka broker and a single ZooKeeper node. For example:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: strimzi.io/name
                      operator: In
                      values:
                        - CLUSTER-NAME-kafka
                topologyKey: "kubernetes.io/hostname"
    # ...
  zookeeper:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: strimzi.io/name
                      operator: In
                      values:
                        - CLUSTER-NAME-zookeeper
                topologyKey: "kubernetes.io/hostname"
    # ...

Where CLUSTER-NAME is the name of your Kafka custom resource.

If you also want to make sure that a Kafka broker and ZooKeeper node do not share the same worker node, use the strimzi.io/cluster label. For example:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: strimzi.io/cluster
                      operator: In
                      values:
                        - CLUSTER-NAME
                topologyKey: "kubernetes.io/hostname"
    # ...
  zookeeper:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: strimzi.io/cluster
                      operator: In
                      values:
                        - CLUSTER-NAME
                topologyKey: "kubernetes.io/hostname"
    # ...

Where CLUSTER-NAME is the name of your Kafka custom resource.

Create or update the resource.
oc apply -f <kafka_configuration_file>
8.12.3. Configuring pod anti-affinity in Kafka components
Pod anti-affinity configuration helps with the stability and performance of Kafka brokers. By using podAntiAffinity
, OpenShift will not schedule Kafka brokers on the same nodes as other workloads. Typically, you want to avoid Kafka running on the same worker node as other network or storage intensive applications such as databases, storage or other messaging platforms.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: application
                      operator: In
                      values:
                        - postgresql
                        - mongodb
                topologyKey: "kubernetes.io/hostname"
    # ...
  zookeeper:
    # ...
Create or update the resource.
This can be done using oc apply:

oc apply -f <kafka_configuration_file>
8.12.4. Configuring node affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Label the nodes where AMQ Streams components should be scheduled.

This can be done using oc label:

oc label node NAME-OF-NODE node-type=fast-network

Alternatively, some of the existing labels might be reused.

Edit the affinity property in the resource specifying the cluster deployment. For example:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: node-type
                      operator: In
                      values:
                        - fast-network
    # ...
  zookeeper:
    # ...

Create or update the resource.

This can be done using oc apply:

oc apply -f <kafka_configuration_file>
8.12.5. Setting up dedicated nodes and scheduling pods on them
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
- Select the nodes which should be used as dedicated.
- Make sure there are no workloads scheduled on these nodes.
Set the taints on the selected nodes:

This can be done using oc adm taint:

oc adm taint node NAME-OF-NODE dedicated=Kafka:NoSchedule

Additionally, add a label to the selected nodes as well.

This can be done using oc label:

oc label node NAME-OF-NODE dedicated=Kafka

Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        tolerations:
          - key: "dedicated"
            operator: "Equal"
            value: "Kafka"
            effect: "NoSchedule"
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: dedicated
                      operator: In
                      values:
                        - Kafka
    # ...
  zookeeper:
    # ...

Create or update the resource.

This can be done using oc apply:

oc apply -f <kafka_configuration_file>
8.13. Configuring logging levels
Configure logging levels in the custom resources of Kafka components and AMQ Streams operators. You can specify the logging levels directly in the spec.logging
property of the custom resource. Or you can define the logging properties in a ConfigMap that’s referenced in the custom resource using the configMapKeyRef
property.
The advantages of using a ConfigMap are that the logging properties are maintained in one place and are accessible to more than one resource. You can also reuse the ConfigMap for more than one resource. If you are using a ConfigMap to specify loggers for AMQ Streams Operators, you can also append the logging specification to add filters.
You specify a logging type in your logging specification:
- inline when specifying logging levels directly
- external when referencing a ConfigMap
Example inline logging configuration

spec:
  # ...
  logging:
    type: inline
    loggers:
      kafka.root.logger.level: INFO
Example external logging configuration

spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: my-config-map
        key: my-config-map-key
Values for the name
and key
of the ConfigMap are mandatory. Default logging is used if the name
or key
is not set.
8.13.1. Logging options for Kafka components and operators
For more information on configuring logging for specific Kafka components or operators, see the following sections.
Kafka component logging
Operator logging
8.13.2. Creating a ConfigMap for logging
To use a ConfigMap to define logging properties, you create the ConfigMap and then reference it as part of the logging definition in the spec of a resource.
The ConfigMap must contain the appropriate logging configuration:
- log4j.properties for Kafka components, ZooKeeper, and the Kafka Bridge
- log4j2.properties for the Topic Operator and User Operator
The configuration must be placed under the corresponding property key.
In this procedure a ConfigMap defines a root logger for a Kafka resource.
Procedure
Create the ConfigMap.
You can create the ConfigMap as a YAML file or from a properties file.
ConfigMap example with a root logger definition for Kafka:
kind: ConfigMap
apiVersion: v1
metadata:
  name: logging-configmap
data:
  log4j.properties:
    kafka.root.logger.level="INFO"
If you are using a properties file, specify the file at the command line:
oc create configmap logging-configmap --from-file=log4j.properties
The properties file defines the logging configuration:
# Define the logger
kafka.root.logger.level="INFO"
# ...
Define external logging in the spec of the resource, setting logging.valueFrom.configMapKeyRef.name to the name of the ConfigMap and logging.valueFrom.configMapKeyRef.key to the key in this ConfigMap.

spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: logging-configmap
        key: log4j.properties
Create or update the resource.
oc apply -f <kafka_configuration_file>
8.13.3. Configuring Cluster Operator logging
Cluster Operator logging is configured through a ConfigMap named strimzi-cluster-operator. A ConfigMap containing logging configuration is created when installing the Cluster Operator. This ConfigMap is described in the file install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml. You configure Cluster Operator logging by changing the data.log4j2.properties values in this ConfigMap.
To update the logging configuration, you can edit the 050-ConfigMap-strimzi-cluster-operator.yaml
file and then run the following command:
oc create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml
Alternatively, edit the ConfigMap
directly:
oc edit configmap strimzi-cluster-operator
With this ConfigMap, you can control various aspects of logging, including the root logger level, the log output format, and the log levels for different components. The monitorInterval setting determines how often the logging configuration is reloaded. You can also control the logging levels for the Kafka AdminClient, ZooKeeper ZKTrustManager, Netty, and the OkHttp client. Netty is a framework used in AMQ Streams for network communication, and OkHttp is a library used for making HTTP requests.
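As a sketch of what the data.log4j2.properties values might look like, the following snippet adjusts the root logger and an individual logger. The logger names and levels here are illustrative, and the defaults shipped with your AMQ Streams version may differ:

# Reload interval for the logging configuration (do not remove this setting)
monitorInterval = 30
# Root logger level for the Cluster Operator
rootLogger.level = INFO
# Illustrative per-component override: quieten the Kafka AdminClient
logger.adminclient.name = org.apache.kafka.clients.admin
logger.adminclient.level = WARN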
If the ConfigMap
is missing when the Cluster Operator is deployed, the default logging values are used.
If the ConfigMap
is accidentally deleted after the Cluster Operator is deployed, the most recently loaded logging configuration is used. Create a new ConfigMap
to load a new logging configuration.
Do not remove the monitorInterval
option from the ConfigMap
.
8.13.4. Adding logging filters to AMQ Streams operators
If you are using a ConfigMap to configure the (log4j2) logging levels for AMQ Streams operators, you can also define logging filters to limit what’s returned in the log.
Logging filters are useful when you have a large number of logging messages. Suppose you set the log level for the logger as DEBUG (rootLogger.level="DEBUG"
). Logging filters reduce the number of logs returned for the logger at that level, so you can focus on a specific resource. When the filter is set, only log messages matching the filter are logged.
Filters use markers to specify what to include in the log. You specify a kind, namespace and name for the marker. For example, if a Kafka cluster is failing, you can isolate the logs by specifying the kind as Kafka
, and use the namespace and name of the failing cluster.
This example shows a marker filter for a Kafka cluster named my-kafka-cluster
.
Basic logging filter configuration
rootLogger.level="INFO"
appender.console.filter.filter1.type=MarkerFilter
appender.console.filter.filter1.onMatch=ACCEPT
appender.console.filter.filter1.onMismatch=DENY
appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)
You can create one or more filters. Here, the log is filtered for two Kafka clusters.
Multiple logging filter configuration
appender.console.filter.filter1.type=MarkerFilter
appender.console.filter.filter1.onMatch=ACCEPT
appender.console.filter.filter1.onMismatch=DENY
appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster-1)

appender.console.filter.filter2.type=MarkerFilter
appender.console.filter.filter2.onMatch=ACCEPT
appender.console.filter.filter2.onMismatch=DENY
appender.console.filter.filter2.marker=Kafka(my-namespace/my-kafka-cluster-2)
Adding filters to the Cluster Operator
To add filters to the Cluster Operator, update its logging ConfigMap YAML file (install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml
).
Procedure
Update the 050-ConfigMap-strimzi-cluster-operator.yaml file to add the filter properties to the ConfigMap.
In this example, the filter properties return logs only for the my-kafka-cluster Kafka cluster:

kind: ConfigMap
apiVersion: v1
metadata:
  name: strimzi-cluster-operator
data:
  log4j2.properties:
    #...
    appender.console.filter.filter1.type=MarkerFilter
    appender.console.filter.filter1.onMatch=ACCEPT
    appender.console.filter.filter1.onMismatch=DENY
    appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)
Alternatively, edit the ConfigMap directly:

oc edit configmap strimzi-cluster-operator
If you updated the YAML file instead of editing the ConfigMap directly, apply the changes by deploying the ConfigMap:

oc create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml
Adding filters to the Topic Operator or User Operator
To add filters to the Topic Operator or User Operator, create or edit a logging ConfigMap.
In this procedure a logging ConfigMap is created with filters for the Topic Operator. The same approach is used for the User Operator.
Procedure
Create the ConfigMap.
You can create the ConfigMap as a YAML file or from a properties file.
In this example, the filter properties return logs only for the my-topic topic:

kind: ConfigMap
apiVersion: v1
metadata:
  name: logging-configmap
data:
  log4j2.properties:
    rootLogger.level="INFO"
    appender.console.filter.filter1.type=MarkerFilter
    appender.console.filter.filter1.onMatch=ACCEPT
    appender.console.filter.filter1.onMismatch=DENY
    appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)
If you are using a properties file, specify the file at the command line:
oc create configmap logging-configmap --from-file=log4j2.properties
The properties file defines the logging configuration:
# Define the logger
rootLogger.level="INFO"
# Set the filters
appender.console.filter.filter1.type=MarkerFilter
appender.console.filter.filter1.onMatch=ACCEPT
appender.console.filter.filter1.onMismatch=DENY
appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)
# ...
Define external logging in the spec of the resource, setting logging.valueFrom.configMapKeyRef.name to the name of the ConfigMap and logging.valueFrom.configMapKeyRef.key to the key in this ConfigMap.
For the Topic Operator, logging is specified in the topicOperator configuration of the Kafka resource.

spec:
  # ...
  entityOperator:
    topicOperator:
      logging:
        type: external
        valueFrom:
          configMapKeyRef:
            name: logging-configmap
            key: log4j2.properties
- Apply the changes by deploying the Cluster Operator:
oc create -f install/cluster-operator -n my-cluster-operator-namespace
8.14. Using ConfigMaps to add configuration
Add specific configuration to your AMQ Streams deployment using ConfigMap
resources. ConfigMaps use key-value pairs to store non-confidential data. Configuration data added to ConfigMaps is maintained in one place and can be reused amongst components.
ConfigMaps can only store the following types of configuration data:
- Logging configuration
- Metrics configuration
- External configuration for Kafka Connect connectors
You can’t use ConfigMaps for other areas of configuration.
When you configure a component, you can add a reference to a ConfigMap using the configMapKeyRef
property.
For example, you can use configMapKeyRef
to reference a ConfigMap that provides configuration for logging. You might use a ConfigMap to pass a Log4j configuration file. You add the reference to the logging
configuration.
Example ConfigMap for logging
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: my-config-map
        key: my-config-map-key
To use a ConfigMap for metrics configuration, you add a reference to the metricsConfig
configuration of the component in the same way.
ExternalConfiguration
properties make data from a ConfigMap (or Secret) mounted to a pod available as environment variables or volumes. You can use external configuration data for the connectors used by Kafka Connect. The data might be related to an external data source, providing the values needed for the connector to communicate with that data source.
For example, you can use the configMapKeyRef
property to pass configuration data from a ConfigMap as an environment variable.
Example ConfigMap providing environment variable values
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    env:
      - name: MY_ENVIRONMENT_VARIABLE
        valueFrom:
          configMapKeyRef:
            name: my-config-map
            key: my-key
If you are using ConfigMaps that are managed externally, use configuration providers to load the data in the ConfigMaps.
8.14.1. Naming custom ConfigMaps
AMQ Streams creates its own ConfigMaps and other resources when it is deployed to OpenShift. The ConfigMaps contain data necessary for running components. The ConfigMaps created by AMQ Streams must not be edited.
Make sure that any custom ConfigMaps you create do not have the same name as these default ConfigMaps. If they have the same name, they will be overwritten. For example, if your ConfigMap has the same name as the ConfigMap for the Kafka cluster, it will be overwritten when there is an update to the Kafka cluster.
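To check which names are already taken before creating your own ConfigMap, you can list the ConfigMaps in the namespace. As a sketch, this assumes the operator-managed resources carry the standard strimzi.io/cluster label:

oc get configmaps -l strimzi.io/cluster=my-cluster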
8.15. Loading configuration values from external sources
Use configuration providers to load configuration data from external sources. The providers operate independently of AMQ Streams. You can use them to load configuration data for all Kafka components, including producers and consumers. You reference the external source in the configuration of the component and provide access rights. The provider loads data without needing to restart the Kafka component or extract files, even when referencing a new external source. For example, use providers to supply the credentials for the Kafka Connect connector configuration. The configuration must include any access rights to the external source.
8.15.1. Enabling configuration providers
You can enable one or more configuration providers using the config.providers
properties in the spec
configuration of a component.
Example configuration to enable a configuration provider
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    # ...
    config.providers: env
    config.providers.env.class: io.strimzi.kafka.EnvVarConfigProvider
  # ...
- KubernetesSecretConfigProvider
- Loads configuration data from OpenShift secrets. You specify the name of the secret and the key within the secret where the configuration data is stored. This provider is useful for storing sensitive configuration data like passwords or other user credentials.
- KubernetesConfigMapConfigProvider
- Loads configuration data from OpenShift config maps. You specify the name of the config map and the key within the config map where the configuration data is stored. This provider is useful for storing non-sensitive configuration data.
- EnvVarConfigProvider
- Loads configuration data from environment variables. You specify the name of the environment variable where the configuration data is stored. This provider is useful for configuring applications running in containers, for example, to load certificates or JAAS configuration from environment variables mapped from secrets.
- FileConfigProvider
- Loads configuration data from a file. You specify the path to the file where the configuration data is stored. This provider is useful for loading configuration data from files that are mounted into containers.
- DirectoryConfigProvider
- Loads configuration data from files within a directory. You specify the path to the directory where the configuration files are stored. This provider is useful for loading multiple configuration files and for organizing configuration data into separate files.
To use KubernetesSecretConfigProvider
and KubernetesConfigMapConfigProvider
, which are part of the OpenShift Configuration Provider plugin, you must set up access rights to the namespace that contains the configuration file.
You can use the other providers without setting up access rights. You can supply connector configuration for Kafka Connect or MirrorMaker 2 in this way by doing the following:
- Mount config maps or secrets into the Kafka Connect pod as environment variables or volumes
- Enable EnvVarConfigProvider, FileConfigProvider, or DirectoryConfigProvider in the Kafka Connect or MirrorMaker 2 configuration
- Pass connector configuration using the externalConfiguration property in the spec of the KafkaConnect or KafkaMirrorMaker2 resource
Using providers helps prevent the passing of restricted information through the Kafka Connect REST interface. You can use this approach in the following scenarios:
- Mounting environment variables with the values a connector uses to connect and communicate with a data source
- Mounting a properties file with values that are used to configure Kafka Connect connectors
- Mounting files in a directory that contains values for the TLS truststore and keystore used by a connector
A restart is required when using a new Secret
or ConfigMap
for a connector, which can disrupt other connectors.
8.15.2. Loading configuration values from secrets or config maps
Use the KubernetesSecretConfigProvider
to provide configuration properties from a secret or the KubernetesConfigMapConfigProvider
to provide configuration properties from a config map.
In this procedure, a config map provides configuration properties for a connector. The properties are specified as key values of the config map. The config map is mounted into the Kafka Connect pod as a volume.
Prerequisites
- A Kafka cluster is running.
- The Cluster Operator is running.
- You have a config map containing the connector configuration.
Example config map with connector properties
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-connector-configuration
data:
  option1: value1
  option2: value2
Procedure
Configure the KafkaConnect resource.
- Enable the KubernetesConfigMapConfigProvider.
The specification shown here can support loading values from config maps and secrets.

Example Kafka Connect configuration to use config maps and secrets

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    # ...
    config.providers: secrets,configmaps 1
    config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 2
    config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 3
  # ...
- 1
- The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from
config.providers
, taking the formconfig.providers.${alias}.class
. - 2
KubernetesConfigMapConfigProvider
provides values from config maps.- 3
KubernetesSecretConfigProvider
provides values from secrets.
Create or update the resource to enable the provider.
oc apply -f <kafka_connect_configuration_file>
Create a role that permits access to the values in the external config map.
Example role to access values from a config map
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: connector-configuration-role
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-connector-configuration"]
  verbs: ["get"]
# ...
The rule gives the role permission to access the my-connector-configuration config map.
Create a role binding to permit access to the namespace that contains the config map.

Example role binding to access the namespace that contains the config map

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: connector-configuration-role-binding
subjects:
- kind: ServiceAccount
  name: my-connect-connect
  namespace: my-project
roleRef:
  kind: Role
  name: connector-configuration-role
  apiGroup: rbac.authorization.k8s.io
# ...
The role binding gives the role permission to access the my-project namespace.
The service account must be the same one used by the Kafka Connect deployment. The service account name format is <cluster_name>-connect, where <cluster_name> is the name of the KafkaConnect custom resource.
Reference the config map in the connector configuration.
Example connector configuration referencing the config map
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
  labels:
    strimzi.io/cluster: my-connect
spec:
  # ...
  config:
    option: ${configmaps:my-project/my-connector-configuration:option1}
    # ...
# ...
The placeholder structure is configmaps:<path_and_file_name>:<property>. KubernetesConfigMapConfigProvider reads and extracts the option1 property value from the external config map.
8.15.3. Loading configuration values from environment variables
Use the EnvVarConfigProvider
to provide configuration properties as environment variables. Environment variables can contain values from config maps or secrets.
In this procedure, environment variables provide configuration properties for a connector to communicate with Amazon AWS. The connector must be able to read the AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
. The values of the environment variables are derived from a secret mounted into the Kafka Connect pod.
The names of user-defined environment variables cannot start with KAFKA_
or STRIMZI_
.
Prerequisites
- A Kafka cluster is running.
- The Cluster Operator is running.
- You have a secret containing the connector configuration.
Example secret with values for environment variables
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
type: Opaque
data:
  awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg=
  awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=
Procedure
Configure the KafkaConnect resource.
- Enable the EnvVarConfigProvider.
- Specify the environment variables using the externalConfiguration property.
Example Kafka Connect configuration to use external environment variables
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    # ...
    config.providers: env 1
    config.providers.env.class: io.strimzi.kafka.EnvVarConfigProvider 2
  # ...
  externalConfiguration:
    env:
      - name: AWS_ACCESS_KEY_ID 3
        valueFrom:
          secretKeyRef:
            name: aws-creds 4
            key: awsAccessKey 5
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsSecretAccessKey
  # ...
- 1
- The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from
config.providers
, taking the formconfig.providers.${alias}.class
. - 2
EnvVarConfigProvider
provides values from environment variables.- 3
- The environment variable takes a value from the secret.
- 4
- The name of the secret containing the environment variable.
- 5
- The name of the key stored in the secret.
Note: The secretKeyRef property references keys in a secret. If you are using a config map instead of a secret, use the configMapKeyRef property.
Create or update the resource to enable the provider.
oc apply -f <kafka_connect_configuration_file>
Reference the environment variable in the connector configuration.
Example connector configuration referencing the environment variable
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
  labels:
    strimzi.io/cluster: my-connect
spec:
  # ...
  config:
    option1: ${env:AWS_ACCESS_KEY_ID}
    option2: ${env:AWS_SECRET_ACCESS_KEY}
  # ...
# ...
The placeholder structure is
env:<environment_variable_name>
.EnvVarConfigProvider
reads and extracts the environment variable values from the mounted secret.
8.15.4. Loading configuration values from a file within a directory
Use the FileConfigProvider
to provide configuration properties from a file within a directory. Files can be config maps or secrets.
In this procedure, a file provides configuration properties for a connector. A database name and password are specified as properties of a secret. The secret is mounted to the Kafka Connect pod as a volume. Volumes are mounted on the path /opt/kafka/external-configuration/<volume-name>
.
Prerequisites
- A Kafka cluster is running.
- The Cluster Operator is running.
- You have a secret containing the connector configuration.
Example secret with database properties
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  connector.properties: |-
    dbUsername: my-username
    dbPassword: my-password
Procedure
Configure the KafkaConnect resource.
- Enable the FileConfigProvider.
- Specify the file using the externalConfiguration property.
Example Kafka Connect configuration to use an external property file
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    config.providers: file 1
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2
  #...
  externalConfiguration:
    volumes:
      - name: connector-config 3
        secret:
          secretName: mysecret 4
- 1
- The alias for the configuration provider is used to define other configuration parameters.
- 2
FileConfigProvider
provides values from properties files. The parameter uses the alias fromconfig.providers
, taking the formconfig.providers.${alias}.class
.- 3
- The name of the volume containing the secret.
- 4
- The name of the secret.
Create or update the resource to enable the provider.
oc apply -f <kafka_connect_configuration_file>
Reference the file properties in the connector configuration as placeholders.
Example connector configuration referencing the file
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 2
  config:
    database.hostname: 192.168.99.1
    database.port: "3306"
    database.user: "${file:/opt/kafka/external-configuration/connector-config/mysecret:dbUsername}"
    database.password: "${file:/opt/kafka/external-configuration/connector-config/mysecret:dbPassword}"
    database.server.id: "184054"
    #...
The placeholder structure is
file:<path_and_file_name>:<property>
.FileConfigProvider
reads and extracts the database username and password property values from the mounted secret.
8.15.5. Loading configuration values from multiple files within a directory
Use the DirectoryConfigProvider
to provide configuration properties from multiple files within a directory. Files can be config maps or secrets.
In this procedure, a secret provides the TLS keystore and truststore user credentials for a connector. The credentials are in separate files. The secrets are mounted into the Kafka Connect pod as volumes. Volumes are mounted on the path /opt/kafka/external-configuration/<volume-name>
.
Prerequisites
- A Kafka cluster is running.
- The Cluster Operator is running.
- You have a secret containing the user credentials.
Example secret with user credentials
apiVersion: v1
kind: Secret
metadata:
  name: my-user
  labels:
    strimzi.io/kind: KafkaUser
    strimzi.io/cluster: my-cluster
type: Opaque
data:
  ca.crt: <public_key> # Public key of the clients CA
  user.crt: <user_certificate> # Public key of the user
  user.key: <user_private_key> # Private key of the user
  user.p12: <store> # PKCS #12 store for user certificates and keys
  user.password: <password_for_store> # Protects the PKCS #12 store
The my-user
secret provides the keystore credentials (user.crt
and user.key
) for the connector.
The <cluster_name>-cluster-ca-cert
secret generated when deploying the Kafka cluster provides the cluster CA certificate as truststore credentials (ca.crt
).
Procedure
Configure the KafkaConnect resource.
- Enable the DirectoryConfigProvider.
- Specify the files using the externalConfiguration property.
Example Kafka Connect configuration to use external property files
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    config.providers: directory 1
    config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 2
  #...
  externalConfiguration:
    volumes: 3
    - name: cluster-ca
      secret:
        secretName: my-cluster-cluster-ca-cert 4
    - name: my-user
      secret:
        secretName: my-user 5
- 1
- The alias for the configuration provider is used to define other configuration parameters.
- 2
DirectoryConfigProvider
provides values from files in a directory. The parameter uses the alias fromconfig.providers
, taking the formconfig.providers.${alias}.class
.- 3
- The names of the volumes containing the secrets.
- 4
- The name of the secret for the cluster CA certificate to supply truststore configuration.
- 5
- The name of the secret for the user to supply keystore configuration.
Create or update the resource to enable the provider.
oc apply -f <kafka_connect_configuration_file>
Reference the file properties in the connector configuration as placeholders.
Example connector configuration referencing the files
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 2
  config:
    # ...
    database.history.producer.security.protocol: SSL
    database.history.producer.ssl.truststore.type: PEM
    database.history.producer.ssl.truststore.certificates: "${directory:/opt/kafka/external-configuration/cluster-ca:ca.crt}"
    database.history.producer.ssl.keystore.type: PEM
    database.history.producer.ssl.keystore.certificate.chain: "${directory:/opt/kafka/external-configuration/my-user:user.crt}"
    database.history.producer.ssl.keystore.key: "${directory:/opt/kafka/external-configuration/my-user:user.key}"
    #...
The placeholder structure is
directory:<path>:<file_name>
.DirectoryConfigProvider
reads and extracts the credentials from the mounted secrets.
8.16. Customizing OpenShift resources
An AMQ Streams deployment creates OpenShift resources, such as Deployment
, Pod
, and Service
resources. These resources are managed by AMQ Streams operators. Only the operator that is responsible for managing a particular OpenShift resource can change that resource. If you try to manually change an operator-managed OpenShift resource, the operator will revert your changes back.
Changing an operator-managed OpenShift resource can be useful if you want to perform certain tasks, such as the following:
-
Adding custom labels or annotations that control how
Pods
are treated by Istio or other services -
Managing how
Loadbalancer
-type Services are created by the cluster
To make the changes to an OpenShift resource, you can use the template
property within the spec
section of various AMQ Streams custom resources.
Here is a list of the custom resources where you can apply the changes:
-
Kafka.spec.kafka
-
Kafka.spec.zookeeper
-
Kafka.spec.entityOperator
-
Kafka.spec.kafkaExporter
-
Kafka.spec.cruiseControl
-
KafkaNodePool.spec
-
KafkaConnect.spec
-
KafkaMirrorMaker.spec
-
KafkaMirrorMaker2.spec
-
KafkaBridge.spec
-
KafkaUser.spec
For more information about the customizable fields in these properties, see the AMQ Streams Custom Resource API Reference.
In the following example, the template
property is used to modify the labels in a Kafka broker’s pod.
Example template customization
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  labels:
    app: my-cluster
spec:
  kafka:
    # ...
    template:
      pod:
        metadata:
          labels:
            mylabel: myvalue
    # ...
8.16.1. Customizing the image pull policy
AMQ Streams allows you to customize the image pull policy for containers in all pods deployed by the Cluster Operator. The image pull policy is configured using the environment variable STRIMZI_IMAGE_PULL_POLICY
in the Cluster Operator deployment. The STRIMZI_IMAGE_PULL_POLICY
environment variable can be set to three different values:
Always
- Container images are pulled from the registry every time the pod is started or restarted.
IfNotPresent
- Container images are pulled from the registry only when they were not pulled before.
Never
- Container images are never pulled from the registry.
Currently, the image pull policy can only be customized for all Kafka, Kafka Connect, and Kafka MirrorMaker clusters at once. Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters.
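As a sketch, assuming the Cluster Operator Deployment has the default name strimzi-cluster-operator, you could change the policy by setting the environment variable on the Deployment; editing the Deployment YAML achieves the same result:

oc set env deployment/strimzi-cluster-operator STRIMZI_IMAGE_PULL_POLICY=IfNotPresent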
8.16.2. Applying a termination grace period
Apply a termination grace period to give a Kafka cluster enough time to shut down cleanly.
Specify the time using the terminationGracePeriodSeconds
property. Add the property to the template.pod
configuration of the Kafka
custom resource.
The time you add will depend on the size of your Kafka cluster. The OpenShift default for the termination grace period is 30 seconds. If you observe that your clusters are not shutting down cleanly, you can increase the termination grace period.
A termination grace period is applied every time a pod is restarted. The period begins when OpenShift sends a term (termination) signal to the processes running in the pod. The period should reflect the amount of time required to transfer the processes of the terminating pod to another pod before they are stopped. After the period ends, a kill signal stops any processes still running in the pod.
The following example adds a termination grace period of 120 seconds to the Kafka
custom resource. You can also specify the configuration in the custom resources of other Kafka components.
Example termination grace period configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      pod:
        terminationGracePeriodSeconds: 120
    # ...
  # ...
Chapter 9. Using the Topic Operator to manage Kafka topics
The KafkaTopic
resource configures topics, including partition and replication factor settings. When you create, modify, or delete a topic using KafkaTopic
, the Topic Operator ensures that these changes are reflected in the Kafka cluster.
For more information on the KafkaTopic
resource, see the KafkaTopic
schema reference.
9.1. Topic management modes
The KafkaTopic
resource is responsible for managing a single topic within a Kafka cluster. The Topic Operator provides two modes for managing KafkaTopic
resources and Kafka topics:
- Bidirectional mode
- Bidirectional mode requires ZooKeeper for cluster management. It is not compatible with using AMQ Streams in KRaft mode.
- (Preview) Unidirectional mode
- Unidirectional mode does not require ZooKeeper for cluster management. It is compatible with using AMQ Streams in KRaft mode.
Unidirectional topic management is available as a preview. Unidirectional topic management is not enabled by default, so you must enable the UnidirectionalTopicOperator
feature gate to be able to use it.
9.1.1. Bidirectional topic management
In bidirectional mode, the Topic Operator operates as follows:
-
When a
KafkaTopic
is created, deleted, or changed, the Topic Operator performs the corresponding operation on the Kafka topic. -
Similarly, when a topic is created, deleted, or changed within the Kafka cluster, the Topic Operator performs the corresponding operation on the
KafkaTopic
resource.
Try to stick to one method of managing topics, either through the KafkaTopic
resources or directly in Kafka. Avoid routinely switching between both methods for a given topic.
9.1.2. (Preview) Unidirectional topic management
In unidirectional mode, the Topic Operator operates as follows:
-
When a
KafkaTopic
is created, deleted, or changed, the Topic Operator performs the corresponding operation on the Kafka topic.
If a topic is created, deleted, or modified directly within the Kafka cluster, without the presence of a corresponding KafkaTopic
resource, the Topic Operator does not manage that topic. The Topic Operator will only manage Kafka topics associated with KafkaTopic
resources and does not interfere with topics managed independently within the Kafka cluster. If a KafkaTopic
does exist for a Kafka topic, any configuration changes made outside the resource are reverted.
9.2. Topic naming conventions
A KafkaTopic
resource includes a name for the topic and a label that identifies the name of the Kafka cluster it belongs to.
Label identifying a Kafka cluster for topic handling
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: topic-name-1
  labels:
    strimzi.io/cluster: my-cluster
spec:
  topicName: topic-name-1
The label provides the cluster name of the Kafka
resource. The Topic Operator uses the label as a mechanism for determining which KafkaTopic
resources to manage. If the label does not match the Kafka cluster, the Topic Operator cannot see the KafkaTopic
, and the topic is not created.
Kafka and OpenShift have their own naming validation rules, and a Kafka topic name might not be a valid resource name in OpenShift. If possible, try to stick to a naming convention that works for both.
Consider the following guidelines:
- Use topic names that reflect the nature of the topic
- Be concise and keep the name under 63 characters
- Use all lower case and hyphens
- Avoid special characters, spaces or symbols
The KafkaTopic
resource allows you to specify the Kafka topic name using the metadata.name
field. However, if the desired Kafka topic name is not a valid OpenShift resource name, you can use the spec.topicName
property to specify the actual name. The spec.topicName
field is optional, and when it’s absent, the Kafka topic name defaults to the metadata.name
of the topic. When a topic is created, the topic name cannot be changed later.
Example of supplying a valid Kafka topic name
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic-1
spec:
  topicName: My.Topic.1
  # ...
If more than one KafkaTopic
resource refers to the same Kafka topic, the resource that was created first is considered to be the one managing the topic. The status of the newer resources is updated to indicate a conflict, and their Ready
status is changed to False
.
If a Kafka client application, such as Kafka Streams, automatically creates topics with invalid OpenShift resource names, the Topic Operator generates a valid metadata.name
when used in bidirectional mode. It replaces invalid characters and appends a hash to the name. However, this behavior does not apply in (preview) unidirectional mode.
Example of replacing an invalid topic name
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic---c55e57fe2546a33f9e603caf57165db4072e827e
# ...
For more information on the requirements for identifiers and names in a cluster, refer to the OpenShift documentation Object Names and IDs.
9.3. Handling changes to topics
How the Topic Operator handles changes to topics depends on the mode of topic management.
-
For bidirectional topic management, configuration changes are synchronized between the Kafka topic and the
KafkaTopic
resource in both directions. Incompatible changes prioritize the Kafka configuration, and theKafkaTopic
resource is adjusted accordingly. -
For unidirectional topic management (currently in preview), configuration changes only go in one direction: from the
KafkaTopic
resource to the Kafka topic. Any changes to a Kafka topic managed outside theKafkaTopic
resource are reverted.
9.3.1. Topic store for bidirectional topic management
For bidirectional topic management, the Topic Operator is capable of handling changes to topics when there is no single source of truth. The KafkaTopic
resource and the Kafka topic can undergo independent modifications, where real-time observation of changes may not always be feasible, particularly when the Topic Operator is not operational. To handle this, the Topic Operator maintains a topic store that stores topic configuration information about each topic. It compares the state of the Kafka cluster and OpenShift with the topic store to determine the necessary changes for synchronization. This evaluation takes place during startup and at regular intervals while the Topic Operator is active.
For example, if the Topic Operator is inactive, and a new KafkaTopic
named my-topic is created, upon restart, the Topic Operator recognizes the absence of my-topic in the topic store. It recognizes that the KafkaTopic
was created after its last operation. Consequently, the Topic Operator generates the corresponding Kafka topic and saves the metadata in the topic store.
The topic store enables the Topic Operator to manage situations where the topic configuration is altered in both Kafka topics and KafkaTopic
resources, as long as the changes are compatible. When Kafka topic configuration is updated or changes are made to the KafkaTopic
custom resource, the topic store is updated after reconciling with the Kafka cluster, as long as the changes are compatible.
The topic store is based on the Kafka Streams key-value mechanism, which uses Kafka topics to persist the state. Topic metadata is cached in-memory and accessed locally within the Topic Operator. Updates from operations applied to the local in-memory cache are persisted to a backup topic store on disk. The topic store is continually synchronized with updates from Kafka topics or OpenShift KafkaTopic
custom resources. Operations are handled rapidly with the topic store set up this way, but should the in-memory cache crash it is automatically repopulated from the persistent storage.
Internal topics support the handling of topic metadata in the topic store.
__strimzi_store_topic
- Input topic for storing the topic metadata
__strimzi-topic-operator-kstreams-topic-store-changelog
- Retains a log of compacted topic store values
Do not delete these topics, as they are essential to the running of the Topic Operator.
9.3.2. Migrating topic metadata from ZooKeeper to the topic store
In previous releases of AMQ Streams, topic metadata was stored in ZooKeeper. The topic store removes this requirement, bringing the metadata into the Kafka cluster, and under the control of the Topic Operator.
When upgrading to AMQ Streams 2.5, the transition to Topic Operator control of the topic store is seamless. Metadata is found and migrated from ZooKeeper, and the old store is deleted.
9.3.3. Downgrading to an AMQ Streams version that uses ZooKeeper to store topic metadata
If you are reverting to a version of AMQ Streams earlier than 1.7, which uses ZooKeeper for the storage of topic metadata, you downgrade your Cluster Operator to the previous version, then downgrade Kafka brokers and client applications to the previous Kafka version as standard.
However, you must also delete the topics that were created for the topic store using a kafka-topics
command, specifying the bootstrap address of the Kafka cluster. For example:
oc run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.1 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete
The command must correspond to the type of listener and authentication used to access the Kafka cluster.
The Topic Operator will reconstruct the ZooKeeper topic metadata from the state of the topics in Kafka.
9.3.4. Automatic creation of topics
Applications can trigger the automatic creation of topics in the Kafka cluster. By default, the Kafka broker configuration auto.create.topics.enable
is set to true
, allowing the broker to create topics automatically when an application attempts to produce or consume from a non-existing topic. Applications might also use the Kafka AdminClient
to automatically create topics. When an application is deployed along with its KafkaTopic
resources, it is possible that automatic topic creation in the cluster happens before the Topic Operator can react to the KafkaTopic
.
For bidirectional topic management, the Topic Operator synchronizes the changes between the topics and KafkaTopic
resources.
If you are trying the unidirectional topic management preview, this can mean that the topics created for an application deployment are initially created with default topic configuration. If the Topic Operator attempts to reconfigure the topics based on KafkaTopic resource specifications included with the application deployment, the operation might fail because the required change to the configuration is not allowed, for example, if the change means lowering the number of topic partitions. For this reason, it is recommended to disable auto.create.topics.enable in the Kafka cluster configuration when using unidirectional topic management.
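A minimal sketch of disabling automatic topic creation, assuming the broker option is passed through spec.kafka.config of the Kafka resource like other broker settings:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    config:
      # Prevent brokers from creating topics on first use
      auto.create.topics.enable: "false"
    # ...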
9.4. Configuring Kafka topics
Use the properties of the KafkaTopic
resource to configure Kafka topics. Changes made to topic configuration in the KafkaTopic
are propagated to Kafka.
You can use oc apply
to create or modify topics, and oc delete
to delete existing topics.
For example:
-
oc apply -f <topic_config_file>
-
oc delete KafkaTopic <topic_name>
To be able to delete topics, delete.topic.enable
must be set to true
(default) in the spec.kafka.config
of the Kafka resource.
This procedure shows how to create a topic with 10 partitions and 2 replicas.
The procedure is the same for the bidirectional and (preview) unidirectional modes of topic management.
Before you begin
The KafkaTopic resource does not allow the following changes:
-
Renaming the topic defined in
spec.topicName
. A mismatch betweenspec.topicName
andstatus.topicName
will be detected. -
Decreasing the number of partitions using
spec.partitions
(not supported by Kafka). -
Modifying the number of replicas specified in
spec.replicas
.
Increasing spec.partitions
for topics with keys will alter the partitioning of records, which can cause issues, especially when the topic uses semantic partitioning.
Prerequisites
- A running Kafka cluster configured with a Kafka broker listener using mTLS authentication and TLS encryption.
- A running Topic Operator (typically deployed with the Entity Operator).
-
For deleting a topic,
delete.topic.enable=true
(default) in thespec.kafka.config
of theKafka
resource.
Procedure
Configure the KafkaTopic resource.

Example Kafka topic configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic-1
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 10
  replicas: 2
Tip: When modifying a topic, you can get the current version of the resource using oc get kafkatopic my-topic-1 -o yaml.
Create the KafkaTopic resource in OpenShift:

oc apply -f <topic_config_file>
Wait for the ready status of the topic to change to True:

oc get kafkatopics -o wide -w -n <namespace>
Kafka topic status
NAME         CLUSTER      PARTITIONS   REPLICATION FACTOR   READY
my-topic-1   my-cluster   10           3                    True
my-topic-2   my-cluster   10           3
my-topic-3   my-cluster   10           3                    True
Topic creation is successful when the READY output shows True.
If the READY column stays blank, get more details on the status from the resource YAML or from the Topic Operator logs. Status messages provide details on the reason for the current status.
oc get kafkatopics my-topic-2 -o yaml
Details on a topic with a NotReady status

# ...
status:
  conditions:
  - lastTransitionTime: "2022-06-13T10:14:43.351550Z"
    message: Number of partitions cannot be decreased
    reason: PartitionDecreaseException
    status: "True"
    type: NotReady
In this example, the topic is not ready because the original number of partitions was reduced in the KafkaTopic configuration. Kafka does not support this.
After resetting the topic configuration, the status shows the topic is ready.
oc get kafkatopics my-topic-2 -o wide -w -n <namespace>
Status update of the topic
NAME         CLUSTER      PARTITIONS   REPLICATION FACTOR   READY
my-topic-2   my-cluster   10           3                    True
Fetching the details shows no messages
oc get kafkatopics my-topic-2 -o yaml
Details on a topic with a READY status

# ...
status:
  conditions:
  - lastTransitionTime: '2022-06-13T10:15:03.761084Z'
    status: 'True'
    type: Ready
9.5. Configuring topics for replication and number of partitions
The recommended configuration for topics managed by the Topic Operator is a topic replication factor of 3, and a minimum of 2 in-sync replicas.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 10 1
  replicas: 3 2
  config:
    min.insync.replicas: 2 3
  #...
- 1
- The number of partitions for the topic.
- 2
- The number of replica topic partitions. Currently, this cannot be changed in the
KafkaTopic
resource, but it can be changed using thekafka-reassign-partitions.sh
tool. - 3
- The minimum number of replica partitions that a message must be successfully written to, or an exception is raised.
In-sync replicas are used in conjunction with the acks
configuration for producer applications. The acks
configuration determines the number of follower partitions a message must be replicated to before the message is acknowledged as successfully received. The bidirectional Topic Operator runs with acks=all
for its internal topics whereby messages must be acknowledged by all in-sync replicas.
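To illustrate how the two settings work together, a producer configured as in the following sketch only receives an acknowledgement once all in-sync replicas have the message; combined with the min.insync.replicas: 2 setting above, a write succeeds while at most one of the three replicas is offline:

# Producer configuration sketch (standard Kafka producer property)
acks=all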
When scaling Kafka clusters by adding or removing brokers, replication factor configuration is not changed and replicas are not reassigned automatically. However, you can use the kafka-reassign-partitions.sh
tool to change the replication factor, and manually reassign replicas to brokers.
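As a sketch of a manual reassignment, assuming a plan saved in a file named reassign.json using the standard JSON layout accepted by the tool (the topic name and broker IDs are illustrative):

# reassign.json (illustrative):
# {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[0,1,2]}]}
bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
  --reassignment-json-file reassign.json --execute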
Alternatively, you can use the integration of Cruise Control for AMQ Streams. Although it cannot change the replication factor for topics, the optimization proposals it generates for rebalancing Kafka include commands that transfer partition replicas and change partition leadership.
9.6. (Preview) Managing KafkaTopic resources without impacting Kafka topics
This procedure describes how to convert Kafka topics that are currently managed through the KafkaTopic
resource into non-managed topics. This capability can be useful in various scenarios. For instance, you might want to update the metadata.name
of a KafkaTopic
resource. You can only do that by deleting the original KafkaTopic
resource and recreating a new one.
By annotating a KafkaTopic
resource with strimzi.io/managed=false
, you indicate that the Topic Operator should no longer manage that particular topic. This allows you to retain the Kafka topic while making changes to the resource’s configuration or other administrative tasks.
You can perform this task if you are using unidirectional topic management.
Unidirectional topic management is available as a preview. Unidirectional topic management is not enabled by default, so you must enable the UnidirectionalTopicOperator
feature gate to be able to use it.
Procedure
Annotate the KafkaTopic resource in OpenShift, setting strimzi.io/managed to false:

oc annotate kafkatopic my-topic-1 strimzi.io/managed=false
Specify the metadata.name of the topic in your KafkaTopic resource, which is my-topic-1 in this example.
Check the status of the KafkaTopic resource to make sure the request was successful:

oc get kafkatopics my-topic-1 -o yaml
Example topic with a Ready status

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  generation: 124
  name: my-topic-1
  finalizers:
  - strimzi.io/topic-operator
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 10
  replicas: 2
  # ...
status:
  observedGeneration: 124 1
  topicName: my-topic-1
  conditions:
  - type: Ready
    status: True
    lastTransitionTime: 20230301T103000Z
- 1
- Successful reconciliation of the resource means the topic is no longer managed.
The value of metadata.generation (the current version of the deployment) must match status.observedGeneration (the latest reconciliation of the resource).
You can now make changes to the KafkaTopic resource without it affecting the Kafka topic it was managing.
For example, to change the metadata.name, do as follows:
- Delete the original KafkaTopic resource:

oc delete kafkatopic <kafka_topic_name>

- Recreate the KafkaTopic resource with a different metadata.name, but use spec.topicName to refer to the same topic that was managed by the original resource.
- If you haven't deleted the original KafkaTopic resource, and you wish to resume management of the Kafka topic again, set the strimzi.io/managed annotation to true or remove the annotation.
9.7. (Preview) Enabling topic management for existing Kafka topics
This procedure describes how to enable topic management for topics that are not currently managed through the KafkaTopic
resource. You do this by creating a matching KafkaTopic
resource.
You can perform this task if you are using unidirectional topic management.
Unidirectional topic management is available as a preview. Unidirectional topic management is not enabled by default, so you must enable the UnidirectionalTopicOperator
feature gate to be able to use it.
Procedure
Create a KafkaTopic resource with a metadata.name that is the same as the Kafka topic. Or use spec.topicName if the name of the topic in Kafka would not be a legal OpenShift resource name.

Example Kafka topic configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic-1
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 10
  replicas: 2
In this example, the Kafka topic is named my-topic-1.
The Topic Operator checks whether the topic is managed by another KafkaTopic resource. If it is, the older resource takes precedence and a resource conflict error is returned in the status of the new resource.
Apply the KafkaTopic resource:

oc apply -f <topic_configuration_file>
Wait for the operator to update the topic in Kafka. The operator updates the Kafka topic with the spec of the KafkaTopic that has the same name.
Check the status of the KafkaTopic resource to make sure the request was successful:

oc get kafkatopics my-topic-1 -o yaml
Example topic with a Ready status

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  generation: 1
  name: my-topic-1
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 10
  replicas: 2
  # ...
status:
  observedGeneration: 1 1
  topicName: my-topic-1
  conditions:
  - type: Ready
    status: True
    lastTransitionTime: 20230301T103000Z
- 1
- Successful reconciliation of the resource means the topic is now managed.
The value of metadata.generation (the current version of the deployment) must match status.observedGeneration (the latest reconciliation of the resource).
9.8. (Preview) Deleting managed topics
Unidirectional topic management supports the deletion of topics managed through the KafkaTopic
resource with or without OpenShift finalizers. This is controlled by the STRIMZI_USE_FINALIZERS
Topic Operator environment variable. By default, this is set to true
, though it can be set to false
in the Topic Operator env
configuration if you do not want the Topic Operator to add finalizers.
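A minimal sketch of disabling finalizers, assuming your AMQ Streams version exposes the Topic Operator container environment through the Entity Operator template (verify the exact path for your release):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  entityOperator:
    topicOperator: {}
    template:
      topicOperatorContainer:
        env:
          # Assumption: STRIMZI_USE_FINALIZERS is read by the Topic Operator container
          - name: STRIMZI_USE_FINALIZERS
            value: "false"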
Unidirectional topic management is available as a preview. Unidirectional topic management is not enabled by default, so you must enable the UnidirectionalTopicOperator
feature gate to be able to use it.
Finalizers ensure orderly and controlled deletion of KafkaTopic
resources. A finalizer for the Topic Operator is added to the metadata of the KafkaTopic
resource:
Finalizer to control topic deletion
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  generation: 1
  name: my-topic-1
  finalizers:
  - strimzi.io/topic-operator
  labels:
    strimzi.io/cluster: my-cluster
In this example, the finalizer is added for topic my-topic-1
. The finalizer prevents the topic from being fully deleted until the finalization process is complete. If you then delete the topic using oc delete kafkatopic my-topic-1
, a timestamp is added to the metadata:
Finalizer timestamp on deletion
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  generation: 1
  name: my-topic-1
  finalizers:
  - strimzi.io/topic-operator
  labels:
    strimzi.io/cluster: my-cluster
  deletionTimestamp: 20230301T000000.000
The resource is still present. If the deletion fails, it is shown in the status of the resource.
When the finalization tasks are successfully executed, the finalizer is removed from the metadata, and the resource is fully deleted.
Finalizers also prevent related resources from being deleted. If the unidirectional Topic Operator is not running, it won't be able to remove its metadata.finalizers entry. Consequently, an attempt to delete the namespace that contains the KafkaTopic resource won't complete until either the operator is restarted, or the finalizer is otherwise removed (for example, using oc edit).
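As a sketch, a JSON merge patch can clear the finalizers list when the operator cannot; use it with care, because it bypasses the finalization logic:

oc patch kafkatopic my-topic-1 --type=merge -p '{"metadata":{"finalizers":null}}'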
Chapter 10. Using the User Operator to manage Kafka users
When you create, modify or delete a user using the KafkaUser
resource, the User Operator ensures that these changes are reflected in the Kafka cluster.
For more information on the KafkaUser
resource, see the KafkaUser
schema reference.
10.1. Configuring Kafka users
Use the properties of the KafkaUser
resource to configure Kafka users.
You can use oc apply
to create or modify users, and oc delete
to delete existing users.
For example:
-
oc apply -f <user_config_file>
-
oc delete KafkaUser <user_name>
Users represent Kafka clients. When you configure Kafka users, you enable the user authentication and authorization mechanisms required by clients to access Kafka. The mechanism used must match the equivalent Kafka
configuration. For more information on using Kafka
and KafkaUser
resources to secure access to Kafka brokers, see Securing access to Kafka brokers.
Prerequisites
- A running Kafka cluster configured with a Kafka broker listener using mTLS authentication and TLS encryption.
- A running User Operator (typically deployed with the Entity Operator).
Procedure
Configure the KafkaUser resource.
This example specifies mTLS authentication and simple authorization using ACLs.

Example Kafka user configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user-1
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # Example consumer ACLs for topic my-topic using consumer group my-group
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operations:
          - Describe
          - Read
        host: "*"
      - resource:
          type: group
          name: my-group
          patternType: literal
        operations:
          - Read
        host: "*"
      # Example producer ACLs for topic my-topic
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operations:
          - Create
          - Describe
          - Write
        host: "*"
Create the
KafkaUser
resource in OpenShift.oc apply -f <user_config_file>
Wait for the ready status of the user to change to
True
:
oc get kafkausers -o wide -w -n <namespace>
Kafka user status
NAME        CLUSTER      AUTHENTICATION   AUTHORIZATION   READY
my-user-1   my-cluster   tls              simple          True
my-user-2   my-cluster   tls              simple
my-user-3   my-cluster   tls              simple          True
User creation is successful when the READY output shows True.
If the READY column stays blank, get more details on the status from the resource YAML or User Operator logs. Messages provide details on the reason for the current status.
oc get kafkausers my-user-2 -o yaml
Details on a user with a
NotReady
status
# ...
status:
  conditions:
    - lastTransitionTime: "2022-06-10T10:07:37.238065Z"
      message: Simple authorization ACL rules are configured but not supported in the Kafka cluster configuration.
      reason: InvalidResourceException
      status: "True"
      type: NotReady
In this example, the user is not ready because simple authorization is not enabled in the
Kafka
configuration.
Kafka configuration for simple authorization
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    authorization:
      type: simple
After updating the Kafka configuration, the status shows the user is ready.
oc get kafkausers my-user-2 -o wide -w -n <namespace>
Status update of the user
NAME        CLUSTER      AUTHENTICATION   AUTHORIZATION   READY
my-user-2   my-cluster   tls              simple          True
Fetching the details shows no messages.
oc get kafkausers my-user-2 -o yaml
Details on a user with a
READY
status
# ...
status:
  conditions:
    - lastTransitionTime: "2022-06-10T10:33:40.166846Z"
      status: "True"
      type: Ready
Chapter 11. Validating schemas with the Red Hat build of Apicurio Registry
You can use the Red Hat build of Apicurio Registry with AMQ Streams.
Apicurio Registry is a datastore for sharing standard event schemas and API designs across API and event-driven architectures. You can use Apicurio Registry to decouple the structure of your data from your client applications, and to share and manage your data types and API descriptions at runtime using a REST interface.
Apicurio Registry stores schemas used to serialize and deserialize messages, which can then be referenced from your client applications to ensure that the messages that they send and receive are compatible with those schemas. Apicurio Registry provides Kafka client serializers/deserializers for Kafka producer and consumer applications. Kafka producer applications use serializers to encode messages that conform to specific event schemas. Kafka consumer applications use deserializers, which validate that the messages have been serialized using the correct schema, based on a specific schema ID.
You can enable your applications to use a schema from the registry. This ensures consistent schema usage and helps to prevent data errors at runtime.
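As an illustrative sketch only (not taken from the Apicurio Registry documentation), a producer might be configured to use the Apicurio Avro serializer along these lines. The serializer class and registry URL property are assumptions based on Apicurio Registry 2.x conventions, and the registry endpoint is a placeholder; verify both against the product documentation for your version.
// Hypothetical producer configuration using an Apicurio Registry serializer.
// Class and property names are assumptions based on Apicurio Registry 2.x;
// verify them against the documentation for your version.
Properties props = new Properties();
props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "io.apicurio.registry.serde.avro.AvroKafkaSerializer");
props.put("apicurio.registry.url", "http://<registry_host>:8080/apis/registry/v2"); // placeholder URL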
Additional resources
- Red Hat build of Apicurio Registry product documentation
- Red Hat build of Apicurio Registry is built on the Apicurio Registry open source community project available on GitHub: Apicurio/apicurio-registry
Chapter 12. Integrating with the Red Hat build of Debezium for change data capture
The Red Hat build of Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate the Red Hat build of Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
Debezium has multiple uses, including:
- Data replication
- Updating caches and search indexes
- Simplifying monolithic applications
- Data integration
- Enabling streaming queries
To capture database changes, deploy Kafka Connect with a Debezium database connector. You configure a KafkaConnector
resource to define the connector instance.
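A minimal sketch of such a resource, assuming the Debezium MySQL connector and placeholder database settings; the exact configuration properties depend on the connector type and Debezium version, so treat this as illustrative rather than a working configuration:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector                 # hypothetical connector name
  labels:
    strimzi.io/cluster: my-connect-cluster  # name of your KafkaConnect resource
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: <database_host>
    database.port: 3306
    database.user: <database_user>
    database.password: <database_password>
    # Additional required options vary by Debezium version; see the Debezium documentation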
For more information on deploying the Red Hat build of Debezium with AMQ Streams, refer to the product documentation. The documentation includes a Getting Started with Debezium guide that walks you through the process of setting up the services and connector required to view change event records for database updates.
Chapter 13. Setting up client access to a Kafka cluster
After you have deployed AMQ Streams, you can set up client access to your Kafka cluster. To verify the deployment, you can deploy example producer and consumer clients. Otherwise, create listeners that provide client access within or outside the OpenShift cluster.
13.1. Deploying example clients
Deploy example producer and consumer clients to send and receive messages. You can use these clients to verify a deployment of AMQ Streams.
Prerequisites
- The Kafka cluster is available for the clients.
Procedure
Deploy a Kafka producer.
oc run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.1 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server cluster-name-kafka-bootstrap:9092 --topic my-topic
- Type a message into the console where the producer is running.
- Press Enter to send the message.
Deploy a Kafka consumer.
oc run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.1 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server cluster-name-kafka-bootstrap:9092 --topic my-topic --from-beginning
- Confirm that you see the incoming messages in the consumer console.
13.2. Configuring listeners to connect to Kafka brokers
Use listeners for client connection to Kafka brokers. AMQ Streams provides a generic GenericKafkaListener
schema with properties to configure listeners through the Kafka
resource. The GenericKafkaListener
provides a flexible approach to listener configuration. You can specify properties to configure internal listeners for connecting within the OpenShift cluster or external listeners for connecting outside the OpenShift cluster.
Specify a connection type
to expose Kafka in the listener configuration. The type you choose depends on your requirements, environment, and infrastructure. The following listener types are supported:
- Internal listeners
-
internal
to connect within the same OpenShift cluster -
cluster-ip
to expose Kafka using per-broker ClusterIP
services
-
- External listeners
-
nodeport
to use ports on OpenShift nodes -
loadbalancer
to use loadbalancer services -
ingress
to use Kubernetes Ingress
and the Ingress NGINX Controller for Kubernetes (Kubernetes only) -
route
to use OpenShift Route
and the default HAProxy router (OpenShift only)
-
Do not use ingress
on OpenShift; use the route
type instead. The Ingress NGINX Controller is only intended for use on Kubernetes. The route
type is only supported on OpenShift.
An internal
type listener configuration uses a headless service and the DNS names given to the broker pods. If you want to join your OpenShift network to an outside network, you can configure an internal
type listener (using the useServiceDnsDomain
property) so that the OpenShift service DNS domain (typically .cluster.local
) is not used. You can also configure a cluster-ip
type of listener that exposes a Kafka cluster based on per-broker ClusterIP
services. This is a useful option when you can’t route through the headless service or you wish to incorporate a custom access mechanism. For example, you might use this listener when building your own type of external listener for a specific Ingress controller or the OpenShift Gateway API.
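As a brief sketch, a cluster-ip listener is declared like any other listener type; the name and port here are illustrative:
listeners:
  - name: clusterip   # illustrative name
    port: 9096        # illustrative port
    type: cluster-ip
    tls: true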
External listeners handle access to a Kafka cluster from networks that require different authentication mechanisms. You can configure external listeners for client access outside an OpenShift environment using a specified connection mechanism, such as a loadbalancer or route. For example, loadbalancers might not be suitable for certain infrastructure, such as bare metal, where node ports provide a better option.
Each listener is defined as an array in the Kafka
resource.
Example listener configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
        configuration:
          useServiceDnsDomain: true
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: route
        tls: true
        configuration:
          brokerCertChainAndKey:
            secretName: my-secret
            certificate: my-certificate.crt
            key: my-key.key
    # ...
You can configure as many listeners as required, as long as their names and ports are unique. You can also configure listeners for secure connection using authentication.
If you want to know more about the pros and cons of each connection type, refer to Accessing Apache Kafka in Strimzi.
If you scale your Kafka cluster while using external listeners, it might trigger a rolling update of all Kafka brokers. This depends on the configuration.
13.3. Setting up client access to a Kafka cluster using listeners
Using the address of the Kafka cluster, you can provide access to a client in the same OpenShift cluster, or provide external access to a client in a different OpenShift namespace or outside OpenShift entirely. This procedure shows how to configure client access to a Kafka cluster from outside OpenShift or from another OpenShift cluster.
A Kafka listener provides access to the Kafka cluster. Client access is secured using the following configuration:
-
An external listener is configured for the Kafka cluster, with TLS encryption and mTLS authentication, and Kafka
simple
authorization enabled. -
A
KafkaUser
is created for the client, with mTLS authentication, and Access Control Lists (ACLs) defined forsimple
authorization.
You can configure your listener to use mutual tls
, scram-sha-512
, or oauth
authentication. mTLS always uses encryption, but encryption is also recommended when using SCRAM-SHA-512 and OAuth 2.0 authentication.
You can configure simple
, oauth
, opa
, or custom
authorization for Kafka brokers. When enabled, authorization is applied to all enabled listeners.
When you configure the KafkaUser
authentication and authorization mechanisms, ensure they match the equivalent Kafka configuration:
-
KafkaUser.spec.authentication
matchesKafka.spec.kafka.listeners[*].authentication
-
KafkaUser.spec.authorization
matchesKafka.spec.kafka.authorization
You should have at least one listener supporting the authentication you want to use for the KafkaUser
.
Authentication between Kafka users and Kafka brokers depends on the authentication settings for each. For example, it is not possible to authenticate a user with mTLS if it is not also enabled in the Kafka configuration.
AMQ Streams operators automate the configuration process and create the certificates required for authentication:
- The Cluster Operator creates the listeners and sets up the cluster and client certificate authority (CA) certificates to enable authentication with the Kafka cluster.
- The User Operator creates the user representing the client and the security credentials used for client authentication, based on the chosen authentication type.
You add the certificates to your client configuration.
In this procedure, the CA certificates generated by the Cluster Operator are used, but you can replace them by installing your own certificates. You can also configure your listener to use a Kafka listener certificate managed by an external CA (certificate authority).
Certificates are available in PEM (.crt) and PKCS #12 (.p12) formats. This procedure uses PEM certificates. Use PEM certificates with clients that use certificates in X.509 format.
For internal clients in the same OpenShift cluster and namespace, you can mount the cluster CA certificate in the pod specification. For more information, see Configuring internal clients to trust the cluster CA.
Prerequisites
- The Kafka cluster is available for connection by a client running outside the OpenShift cluster
- The Cluster Operator and User Operator are running in the cluster
Procedure
Configure the Kafka cluster with a Kafka listener.
- Define the authentication required to access the Kafka broker through the listener.
Enable authorization on the Kafka broker.
Example listener configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    listeners: 1
      - name: external 2
        port: 9094 3
        type: <listener_type> 4
        tls: true 5
        authentication:
          type: tls 6
        configuration: 7
          #...
    authorization: 8
      type: simple
      superUsers:
        - super-user-name 9
    # ...
- 1
- Configuration options for enabling external listeners are described in the Generic Kafka listener schema reference.
- 2
- Name to identify the listener. Must be unique within the Kafka cluster.
- 3
- Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients.
- 4
- External listener type specified as
route
(OpenShift only),loadbalancer
,nodeport
oringress
(Kubernetes only). An internal listener is specified asinternal
orcluster-ip
. - 5
- Required. TLS encryption on the listener. For
route
andingress
type listeners it must be set totrue
. For mTLS authentication, also use theauthentication
property. - 6
- Client authentication mechanism on the listener. For server and client authentication using mTLS, you specify
tls: true
andauthentication.type: tls
. - 7
- (Optional) Depending on the requirements of the listener type, you can specify additional listener configuration.
- 8
- Authorization specified as
simple
, which uses theAclAuthorizer
Kafka plugin. - 9
- (Optional) Super users can access all brokers regardless of any access restrictions defined in ACLs.
Warning
An OpenShift Route address comprises the name of the Kafka cluster, the name of the listener, and the name of the namespace it is created in. For example,
my-cluster-kafka-listener1-bootstrap-myproject
(CLUSTER-NAME-kafka-LISTENER-NAME-bootstrap-NAMESPACE). If you are using a route
listener type, be careful that the whole length of the address does not exceed a maximum limit of 63 characters.
Create or update the
Kafka
resource.
oc apply -f <kafka_configuration_file>
The Kafka cluster is configured with a Kafka broker listener using mTLS authentication.
A service is created for each Kafka broker pod.
A service is created to serve as the bootstrap address for connection to the Kafka cluster.
A service is also created as the external bootstrap address for external connection to the Kafka cluster using
nodeport
listeners.
The cluster CA certificate to verify the identity of the Kafka brokers is also created in the secret
<cluster_name>-cluster-ca-cert
.
Note
If you scale your Kafka cluster while using external listeners, it might trigger a rolling update of all Kafka brokers. This depends on the configuration.
Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the
Kafka
resource.
oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name=="<listener_name>")].bootstrapServers}{"\n"}'
For example:
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}'
Use the bootstrap address in your Kafka client to connect to the Kafka cluster.
Create or modify a user representing the client that requires access to the Kafka cluster.
-
Specify the same authentication type as the
Kafka
listener. Specify the authorization ACLs for
simple
authorization.
Example user configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster 1
spec:
  authentication:
    type: tls 2
  authorization:
    type: simple
    acls: 3
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operations:
          - Describe
          - Read
      - resource:
          type: group
          name: my-group
          patternType: literal
        operations:
          - Read
- 1
- The label must match the name of the Kafka cluster the user belongs to.
- 2
- Authentication type matching the authentication configured for the Kafka listener.
- 3
- ACL rules for simple authorization.
Create or modify the
KafkaUser
resource.
oc apply -f <user_config_file>
The user is created, as well as a secret with the same name as the
KafkaUser
resource. The secret contains a public and private key for mTLS authentication.
Example secret
apiVersion: v1
kind: Secret
metadata:
  name: my-user
  labels:
    strimzi.io/kind: KafkaUser
    strimzi.io/cluster: my-cluster
type: Opaque
data:
  ca.crt: <public_key>                  # Public key of the clients CA
  user.crt: <user_certificate>          # Public key of the user
  user.key: <user_private_key>          # Private key of the user
  user.p12: <store>                     # PKCS #12 store for user certificates and keys
  user.password: <password_for_store>   # Protects the PKCS #12 store
Extract the cluster CA certificate from the
<cluster_name>-cluster-ca-cert
secret of the Kafka cluster.
oc get secret <cluster_name>-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
Extract the user CA certificate from the
<user_name>
secret.
oc get secret <user_name> -o jsonpath='{.data.user\.crt}' | base64 -d > user.crt
Extract the private key of the user from the
<user_name>
secret.
oc get secret <user_name> -o jsonpath='{.data.user\.key}' | base64 -d > user.key
Configure your client with the bootstrap address hostname and port for connecting to the Kafka cluster:
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<hostname>:<port>");
Configure your client with the truststore credentials to verify the identity of the Kafka cluster.
Specify the public cluster CA certificate.
Example truststore configuration
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PEM");
props.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, "<ca.crt_file_content>");
SSL is the specified security protocol for mTLS authentication. Specify
SASL_SSL
for SCRAM-SHA-512 authentication over TLS. PEM is the file format of the truststore.
Configure your client with the keystore credentials to verify the user when connecting to the Kafka cluster.
Specify the public certificate and private key.
Example keystore configuration
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
props.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, "PEM");
props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, "<user.crt_file_content>");
props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, "<user.key_file_content>");
Add the keystore certificate and the private key directly to the configuration, in single-line format. Between the BEGIN CERTIFICATE and END CERTIFICATE delimiters, start with a newline character (\n). End each line from the original certificate with \n too.
Example keystore configuration
props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, "-----BEGIN CERTIFICATE-----\n<user_certificate_content_line_1>\n<user_certificate_content_line_n>\n-----END CERTIFICATE-----");
props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, "-----BEGIN PRIVATE KEY-----\n<user_key_content_line_1>\n<user_key_content_line_n>\n-----END PRIVATE KEY-----");
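Pulling these fragments together, the following is a minimal consumer sketch, not taken from the product examples. The group ID, topic name, and String deserializers are illustrative additions, and the placeholder values must be replaced with your own:
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SecureConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Bootstrap address retrieved from the Kafka resource status
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<hostname>:<port>");
        // Truststore credentials: public cluster CA certificate in PEM format
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PEM");
        props.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, "<ca.crt_file_content>");
        // Keystore credentials: user certificate and private key in PEM format
        props.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, "PEM");
        props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, "<user.crt_file_content>");
        props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, "<user.key_file_content>");
        // Illustrative consumer settings; the group and topic must be allowed by the user's ACLs
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}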
Additional resources
- Section 14.1.1, “Listener authentication”
- Section 14.1.2, “Kafka authorization”
If you are using an authorization server, you can use token-based authentication and authorization.
13.4. Accessing Kafka using node ports
Use node ports to access an AMQ Streams Kafka cluster from an external client outside the OpenShift cluster.
To connect to a broker, you specify a hostname and port number for the Kafka bootstrap address, as well as the certificate used for TLS encryption.
The procedure shows basic nodeport
listener configuration. You can use listener properties to enable TLS encryption (tls
) and specify a client authentication mechanism (authentication
). Add additional configuration using configuration
properties. For example, you can use the following configuration properties with nodeport
listeners:
preferredNodePortAddressType
- Specifies the first address type that’s checked as the node address.
externalTrafficPolicy
- Specifies whether the service routes external traffic to node-local or cluster-wide endpoints.
nodePort
- Overrides the assigned node port numbers for the bootstrap and broker services.
For more information on listener configuration, see the GenericKafkaListener
schema reference.
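The following sketch shows where these properties sit in a nodeport listener configuration; all values are illustrative:
listeners:
  - name: external
    port: 9094
    type: nodeport
    tls: true
    configuration:
      preferredNodePortAddressType: InternalDNS   # illustrative address type
      externalTrafficPolicy: Local                # or Cluster
      bootstrap:
        nodePort: 32100     # illustrative port override for the bootstrap service
      brokers:
        - broker: 0
          nodePort: 32000   # illustrative port override for broker 0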
Prerequisites
- A running Cluster Operator
In this procedure, the Kafka cluster name is my-cluster
. The name of the listener is external
.
Procedure
Configure a
Kafka
resource with an external listener set to the nodeport
type.
For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  labels:
    app: my-cluster
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    listeners:
      - name: external
        port: 9094
        type: nodeport
        tls: true
        authentication:
          type: tls
    # ...
  # ...
  zookeeper:
    # ...
Create or update the resource.
oc apply -f <kafka_configuration_file>
A cluster CA certificate to verify the identity of the Kafka brokers is created in the secret
my-cluster-cluster-ca-cert
.NodePort
type services are created for each Kafka broker, as well as an external bootstrap service.
Node port services created for the bootstrap and brokers
NAME                                  TYPE       CLUSTER-IP       PORT(S)
my-cluster-kafka-external-0           NodePort   172.30.55.13     9094:31789/TCP
my-cluster-kafka-external-1           NodePort   172.30.250.248   9094:30028/TCP
my-cluster-kafka-external-2           NodePort   172.30.115.81    9094:32650/TCP
my-cluster-kafka-external-bootstrap   NodePort   172.30.30.23     9094:32650/TCP
The bootstrap address used for client connection is propagated to the
status
of theKafka
resource.
Example status for the bootstrap address
status:
  clusterId: Y_RJQDGKRXmNF7fEcWldJQ
  conditions:
    - lastTransitionTime: '2023-01-31T14:59:37.113630Z'
      status: 'True'
      type: Ready
  listeners:
    # ...
    - addresses:
        - host: ip-10-0-224-199.us-west-2.compute.internal
          port: 32650
      bootstrapServers: 'ip-10-0-224-199.us-west-2.compute.internal:32650'
      certificates:
        - |
          -----BEGIN CERTIFICATE-----
          -----END CERTIFICATE-----
      name: external
      type: external
  observedGeneration: 2
  # ...
Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the
Kafka
resource.
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}'

ip-10-0-224-199.us-west-2.compute.internal:32650
Extract the cluster CA certificate.
oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
Configure your client to connect to the brokers.
-
Specify the bootstrap host and port in your Kafka client as the bootstrap address to connect to the Kafka cluster. For example,
ip-10-0-224-199.us-west-2.compute.internal:32650
. Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection.
If you enabled a client authentication mechanism, you will also need to configure it in your client.
If you are using your own listener certificates, check whether you need to add the CA certificate to the client’s truststore configuration. If it is a public (external) CA, you usually won’t need to add it.
13.5. Accessing Kafka using loadbalancers
Use loadbalancers to access an AMQ Streams Kafka cluster from an external client outside the OpenShift cluster.
To connect to a broker, you specify a hostname and port number for the Kafka bootstrap address, as well as the certificate used for TLS encryption.
The procedure shows basic loadbalancer
listener configuration. You can use listener properties to enable TLS encryption (tls
) and specify a client authentication mechanism (authentication
). Add additional configuration using configuration
properties. For example, you can use the following configuration properties with loadbalancer
listeners:
loadBalancerSourceRanges
- Restricts traffic to a specified list of CIDR (Classless Inter-Domain Routing) ranges.
externalTrafficPolicy
- Specifies whether the service routes external traffic to node-local or cluster-wide endpoints.
loadBalancerIP
- Requests a specific IP address when creating a loadbalancer.
For more information on listener configuration, see the GenericKafkaListener
schema reference.
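The following sketch shows where these properties sit in a loadbalancer listener configuration; all values are illustrative:
listeners:
  - name: external
    port: 9095
    type: loadbalancer
    tls: true
    configuration:
      loadBalancerSourceRanges:
        - 192.168.0.0/16             # illustrative CIDR range
      externalTrafficPolicy: Local   # or Cluster
      bootstrap:
        loadBalancerIP: 203.0.113.10 # illustrative IP request for the bootstrap loadbalancer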
Prerequisites
- A running Cluster Operator
In this procedure, the Kafka cluster name is my-cluster
. The name of the listener is external
.
Procedure
Configure a
Kafka
resource with an external listener set to the loadbalancer
type.
For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  labels:
    app: my-cluster
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    listeners:
      - name: external
        port: 9095
        type: loadbalancer
        tls: true
        authentication:
          type: tls
    # ...
  # ...
  zookeeper:
    # ...
Create or update the resource.
oc apply -f <kafka_configuration_file>
A cluster CA certificate to verify the identity of the Kafka brokers is also created in the secret
my-cluster-cluster-ca-cert
.loadbalancer
type services and loadbalancers are created for each Kafka broker, as well as an external bootstrap service.
Loadbalancer services and loadbalancers created for the bootstrap and brokers
NAME                                  TYPE           CLUSTER-IP       PORT(S)
my-cluster-kafka-external-0           LoadBalancer   172.30.204.234   9095:30011/TCP
my-cluster-kafka-external-1           LoadBalancer   172.30.164.89    9095:32544/TCP
my-cluster-kafka-external-2           LoadBalancer   172.30.73.151    9095:32504/TCP
my-cluster-kafka-external-bootstrap   LoadBalancer   172.30.30.228    9095:30371/TCP

NAME                                  EXTERNAL-IP (loadbalancer)
my-cluster-kafka-external-0           a8a519e464b924000b6c0f0a05e19f0d-1132975133.us-west-2.elb.amazonaws.com
my-cluster-kafka-external-1           ab6adc22b556343afb0db5ea05d07347-611832211.us-west-2.elb.amazonaws.com
my-cluster-kafka-external-2           a9173e8ccb1914778aeb17eca98713c0-777597560.us-west-2.elb.amazonaws.com
my-cluster-kafka-external-bootstrap   a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com
The bootstrap address used for client connection is propagated to the
status
of theKafka
resource.
Example status for the bootstrap address
status:
  clusterId: Y_RJQDGKRXmNF7fEcWldJQ
  conditions:
    - lastTransitionTime: '2023-01-31T14:59:37.113630Z'
      status: 'True'
      type: Ready
  listeners:
    # ...
    - addresses:
        - host: >-
            a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com
          port: 9095
      bootstrapServers: >-
        a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9095
      certificates:
        - |
          -----BEGIN CERTIFICATE-----
          -----END CERTIFICATE-----
      name: external
      type: external
  observedGeneration: 2
  # ...
The DNS addresses used for client connection are propagated to the
status
of each loadbalancer service.
Example status for the bootstrap loadbalancer
status:
  loadBalancer:
    ingress:
      - hostname: >-
          a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com
# ...
Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the
Kafka
resource.
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}'

a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9095
Extract the cluster CA certificate.
oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
Configure your client to connect to the brokers.
-
Specify the bootstrap host and port in your Kafka client as the bootstrap address to connect to the Kafka cluster. For example,
a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9095
. Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection.
If you enabled a client authentication mechanism, you will also need to configure it in your client.
If you are using your own listener certificates, check whether you need to add the CA certificate to the client’s truststore configuration. If it is a public (external) CA, you usually won’t need to add it.
13.6. Accessing Kafka using OpenShift routes
Use OpenShift routes to access an AMQ Streams Kafka cluster from clients outside the OpenShift cluster.
To be able to use routes, add configuration for a route
type listener in the Kafka
custom resource. When applied, the configuration creates a dedicated route and service for an external bootstrap and each broker in the cluster. Clients connect to the bootstrap route, which routes them through the bootstrap service to connect to a broker. Per-broker connections are then established using DNS names, which route traffic from the client to the broker through the broker-specific routes and services.
To connect to a broker, you specify a hostname for the route bootstrap address, as well as the certificate used for TLS encryption. For access using routes, the port is always 443.
An OpenShift route address comprises the name of the Kafka cluster, the name of the listener, and the name of the project it is created in. For example, my-cluster-kafka-external-bootstrap-myproject
(<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>). Be careful that the whole length of the address does not exceed a maximum limit of 63 characters.
The procedure shows basic listener configuration. TLS encryption (tls
) must be enabled. You can also specify a client authentication mechanism (authentication
). Add additional configuration using configuration
properties. For example, you can use the host
configuration property with route
listeners to specify the hostnames used by the bootstrap and per-broker services.
For more information on listener configuration, see the GenericKafkaListener
schema reference.
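The following sketch shows how the host property might be used in a route listener configuration; the hostnames are illustrative:
listeners:
  - name: external
    port: 9094
    type: route
    tls: true
    configuration:
      bootstrap:
        host: bootstrap.myrouter.com    # illustrative bootstrap hostname
      brokers:
        - broker: 0
          host: broker-0.myrouter.com   # illustrative per-broker hostname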
TLS passthrough
TLS passthrough is enabled for routes created by AMQ Streams. Kafka uses a binary protocol over TCP, but routes are designed to work with the HTTP protocol. To route TCP traffic through routes, AMQ Streams uses TLS passthrough with Server Name Indication (SNI).
SNI helps to identify the target broker and pass the connection to it. In passthrough mode, TLS encryption is always used. Because the connection passes through to the brokers, the listeners use TLS certificates signed by the internal cluster CA, not the ingress certificates. To configure listeners to use your own listener certificates, use the brokerCertChainAndKey
property.
Prerequisites
- A running Cluster Operator
In this procedure, the Kafka cluster name is my-cluster
. The name of the listener is external
.
Procedure
Configure a
Kafka
resource with an external listener set to the route
type.
For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  labels:
    app: my-cluster
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    listeners:
      - name: external
        port: 9094
        type: route
        tls: true 1
        authentication:
          type: tls
    # ...
  # ...
  zookeeper:
    # ...
- 1
- For
route
type listeners, TLS encryption must be enabled (true
).
Create or update the resource.
oc apply -f <kafka_configuration_file>
A cluster CA certificate to verify the identity of the Kafka brokers is created in the secret
my-cluster-cluster-ca-cert
.ClusterIP
type services are created for each Kafka broker, as well as an external bootstrap service. A
route
is also created for each service, with a DNS address (host/port) to expose them using the default OpenShift HAProxy router. The routes are preconfigured with TLS passthrough.
Routes created for the bootstraps and brokers
NAME                                  HOST/PORT                                                    SERVICES                              PORT   TERMINATION
my-cluster-kafka-external-0           my-cluster-kafka-external-0-my-project.router.com           my-cluster-kafka-external-0           9094   passthrough
my-cluster-kafka-external-1           my-cluster-kafka-external-1-my-project.router.com           my-cluster-kafka-external-1           9094   passthrough
my-cluster-kafka-external-2           my-cluster-kafka-external-2-my-project.router.com           my-cluster-kafka-external-2           9094   passthrough
my-cluster-kafka-external-bootstrap   my-cluster-kafka-external-bootstrap-my-project.router.com   my-cluster-kafka-external-bootstrap   9094   passthrough
The DNS addresses used for client connection are propagated to the
status
of each route.
Example status for the bootstrap route
status:
  ingress:
    - host: >-
        my-cluster-kafka-external-bootstrap-my-project.router.com
# ...
Use a target broker to check the client-server TLS connection on port 443 using the OpenSSL
s_client
.
openssl s_client -connect my-cluster-kafka-external-0-my-project.router.com:443 -servername my-cluster-kafka-external-0-my-project.router.com -showcerts
The server name is the SNI for passing the connection to the broker.
If the connection is successful, the certificates for the broker are returned.
Certificates for the broker
Certificate chain
 0 s:O = io.strimzi, CN = my-cluster-kafka
   i:O = io.strimzi, CN = cluster-ca v0
Retrieve the address of the bootstrap service from the status of the
Kafka
resource.
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}'

my-cluster-kafka-external-bootstrap-my-project.router.com:443
The address comprises the cluster name, the listener name, the project name and the domain of the router (
router.com
in this example).
Extract the cluster CA certificate.
oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
Configure your client to connect to the brokers.
- Specify the address for the bootstrap service and port 443 in your Kafka client as the bootstrap address to connect to the Kafka cluster.
Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection.
If you enabled a client authentication mechanism, you will also need to configure it in your client.
If you are using your own listener certificates, check whether you need to add the CA certificate to the client’s truststore configuration. If it is a public (external) CA, you usually won’t need to add it.
Chapter 14. Managing secure access to Kafka
Secure your Kafka cluster by managing the access a client has to Kafka brokers. Specify configuration options to secure Kafka brokers and clients.
A secure connection between Kafka brokers and clients can encompass the following:
- Encryption for data exchange
- Authentication to prove identity
- Authorization to allow or decline actions executed by users
The authentication and authorization mechanisms specified for a client must match those specified for the Kafka brokers. AMQ Streams operators automate the configuration process and create the certificates required for authentication. The Cluster Operator automatically sets up TLS certificates for data encryption and authentication within your cluster.
14.1. Security options for Kafka
Use the Kafka
resource to configure the mechanisms used for Kafka authentication and authorization.
14.1.1. Listener authentication
Configure client authentication for Kafka brokers when creating listeners. Specify the listener authentication type using the Kafka.spec.kafka.listeners.authentication
property in the Kafka
resource.
For clients inside the OpenShift cluster, you can create plain
(without encryption) or tls
internal listeners. The internal
listener type uses a headless service and the DNS names given to the broker pods. As an alternative to the headless service, you can also create a cluster-ip
type of internal listener to expose Kafka using per-broker ClusterIP
services. For clients outside the OpenShift cluster, you create external listeners and specify a connection mechanism, which can be nodeport
, loadbalancer
, ingress
(Kubernetes only), or route
(OpenShift only).
For more information on the configuration options for connecting an external client, see Chapter 13, Setting up client access to a Kafka cluster.
Supported authentication options:
- mTLS authentication (only on the listeners with TLS enabled encryption)
- SCRAM-SHA-512 authentication
- OAuth 2.0 token-based authentication
- Custom authentication
The authentication option you choose depends on how you wish to authenticate client access to Kafka brokers.
Try exploring the standard authentication options before using custom authentication. Custom authentication allows for any type of Kafka-supported authentication. It can provide more flexibility, but also adds complexity.
Figure 14.1. Kafka listener authentication options
The listener authentication
property is used to specify an authentication mechanism specific to that listener.
If no authentication
property is specified, the listener does not authenticate clients that connect through that listener, and accepts all connections without authentication.
Authentication must be configured when using the User Operator to manage KafkaUsers
.
The following example shows:
-
A
plain
listener configured for SCRAM-SHA-512 authentication -
A
tls
listener with mTLS authentication -
An
external
listener with mTLS authentication
Each listener is configured with a unique name and port within a Kafka cluster.
When configuring listeners for client access to brokers, you can use port 9092 or higher (9093, 9094, and so on), but with a few exceptions. The listeners cannot be configured to use the ports reserved for interbroker communication (9090 and 9091), Prometheus metrics (9404), and JMX (Java Management Extensions) monitoring (9999).
Example listener authentication configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: true
        authentication:
          type: scram-sha-512
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
        authentication:
          type: tls
    # ...
14.1.1.1. mTLS authentication
mTLS authentication is always used for the communication between Kafka brokers and ZooKeeper pods.
AMQ Streams can configure Kafka to use TLS (Transport Layer Security) to provide encrypted communication between Kafka brokers and clients either with or without mutual authentication. For mutual, or two-way, authentication, both the server and the client present certificates. When you configure mTLS authentication, the broker authenticates the client (client authentication) and the client authenticates the broker (server authentication).
mTLS listener configuration in the Kafka
resource requires the following:
-
tls: true
to specify TLS encryption and server authentication -
authentication.type: tls
to specify the client authentication
When a Kafka cluster is created by the Cluster Operator, it creates a new secret with the name <cluster_name>-cluster-ca-cert
. The secret contains a CA certificate. The CA certificate is in PEM and PKCS #12 format. To verify a Kafka cluster, add the CA certificate to the truststore in your client configuration. To verify a client, add a user certificate and key to the keystore in your client configuration. For more information on configuring a client for mTLS, see Section 14.2.2, “User authentication”.
TLS authentication is more commonly one-way, with one party authenticating the identity of another. For example, when HTTPS is used between a web browser and a web server, the browser obtains proof of the identity of the web server.
14.1.1.2. SCRAM-SHA-512 authentication
SCRAM (Salted Challenge Response Authentication Mechanism) is an authentication protocol that can establish mutual authentication using passwords. AMQ Streams can configure Kafka to use SASL (Simple Authentication and Security Layer) SCRAM-SHA-512 to provide authentication on both unencrypted and encrypted client connections.
When SCRAM-SHA-512 authentication is used with a TLS connection, the TLS protocol provides the encryption, but is not used for authentication.
The following properties of SCRAM make it safe to use SCRAM-SHA-512 even on unencrypted connections:
- The passwords are not sent in the clear over the communication channel. Instead the client and the server are each challenged by the other to offer proof that they know the password of the authenticating user.
- The server and client each generate a new challenge for each authentication exchange. This means that the exchange is resilient against replay attacks.
When KafkaUser.spec.authentication.type
is configured with scram-sha-512,
the User Operator generates a random 12-character password consisting of uppercase and lowercase ASCII letters and numbers.
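The password is stored, base64-encoded, in a secret with the same name as the KafkaUser resource. As a sketch, assuming a user named my-user, you might retrieve it with:
oc get secret my-user -o jsonpath='{.data.password}' | base64 -d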
14.1.1.3. Network policies
By default, AMQ Streams automatically creates a NetworkPolicy
resource for every listener that is enabled on a Kafka broker. This NetworkPolicy
allows applications to connect to listeners in all namespaces. Use network policies as part of the listener configuration.
If you want to restrict access to a listener at the network level to only selected applications or namespaces, use the networkPolicyPeers
property. Each listener can have a different networkPolicyPeers
configuration. For more information on network policy peers, refer to the NetworkPolicyPeer API reference.
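As a sketch, assuming client pods labeled app: kafka-client, a listener restricted to those pods might be configured like this:
listeners:
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: tls
    networkPolicyPeers:
      - podSelector:
          matchLabels:
            app: kafka-client   # illustrative label selecting allowed client pods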
If you want to use custom network policies, you can set the STRIMZI_NETWORK_POLICY_GENERATION
environment variable to false
in the Cluster Operator configuration. For more information, see Section 8.5, “Configuring the Cluster Operator”.
Your configuration of OpenShift must support ingress NetworkPolicies
in order to use network policies in AMQ Streams.
14.1.1.4. Providing listener certificates
You can provide your own server certificates, called Kafka listener certificates, for TLS listeners or external listeners which have TLS encryption enabled. For more information, see Section 14.3.4, “Providing your own Kafka listener certificates for TLS encryption”.
Additional resources
14.1.2. Kafka authorization
Configure authorization for Kafka brokers using the Kafka.spec.kafka.authorization
property in the Kafka
resource. If the authorization
property is missing, no authorization is enabled and clients have no restrictions. When enabled, authorization is applied to all enabled listeners. The authorization method is defined in the type
field.
Supported authorization options:
- Simple authorization
- OAuth 2.0 authorization (if you are using OAuth 2.0 token based authentication)
- Open Policy Agent (OPA) authorization
- Custom authorization
Figure 14.2. Kafka cluster authorization options

14.1.2.1. Super users
Super users can access all resources in your Kafka cluster regardless of any access restrictions, and are supported by all authorization mechanisms.
To designate super users for a Kafka cluster, add a list of user principals to the superUsers
property. If a user uses mTLS authentication, the username is the common name from the TLS certificate subject prefixed with CN=
. If you are not using the User Operator and using your own certificates for mTLS, the username is the full certificate subject. A full certificate subject can have the following fields: CN=user,OU=my_ou,O=my_org,L=my_location,ST=my_state,C=my_country_code
. Omit any fields that are not present.
An example configuration with super users
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    authorization:
      type: simple
      superUsers:
        - CN=client_1
        - user_2
        - CN=client_3
        - CN=client_4,OU=my_ou,O=my_org,L=my_location,ST=my_state,C=US
        - CN=client_5,OU=my_ou,O=my_org,C=GB
        - CN=client_6,O=my_org
    # ...
14.2. Security options for Kafka clients
Use the KafkaUser
resource to configure the authentication mechanism, authorization mechanism, and access rights for Kafka clients. In terms of configuring security, clients are represented as users.
You can authenticate and authorize user access to Kafka brokers. Authentication permits access, and authorization constrains the access to permissible actions.
You can also create super users that have unconstrained access to Kafka brokers.
The authentication and authorization mechanisms must match the specification for the listener used to access the Kafka brokers.
For more information on configuring a KafkaUser
resource to access Kafka brokers securely, see Section 13.3, “Setting up client access to a Kafka cluster using listeners”.
14.2.1. Identifying a Kafka cluster for user handling
A KafkaUser
resource includes a label that defines the appropriate name of the Kafka cluster (derived from the name of the Kafka
resource) to which it belongs.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
The label is used by the User Operator to identify the KafkaUser
resource and create a new user, and also in subsequent handling of the user.
If the label does not match the Kafka cluster, the User Operator cannot identify the KafkaUser
and the user is not created.
If the status of the KafkaUser
resource remains empty, check your label.
14.2.2. User authentication
Use the KafkaUser
custom resource to configure authentication credentials for users (clients) that require access to a Kafka cluster. Configure the credentials using the authentication
property in KafkaUser.spec
. By specifying a type
, you control what credentials are generated.
Supported authentication types:
-
tls
for mTLS authentication -
tls-external
for mTLS authentication using external certificates -
scram-sha-512
for SCRAM-SHA-512 authentication
If tls
or scram-sha-512
is specified, the User Operator creates authentication credentials when it creates the user. If tls-external
is specified, the user still uses mTLS, but no authentication credentials are created. Use this option when you’re providing your own certificates. When no authentication type is specified, the User Operator does not create the user or its credentials.
You can use tls-external
to authenticate with mTLS using a certificate issued outside the User Operator. The User Operator does not generate a TLS certificate or a secret. You can still manage ACL rules and quotas through the User Operator in the same way as when you’re using the tls
mechanism. This means that you use the CN=USER-NAME
format when specifying ACL rules and quotas. USER-NAME is the common name given in a TLS certificate.
14.2.2.1. mTLS authentication
To use mTLS authentication, you set the type
field in the KafkaUser
resource to tls
.
Example user with mTLS authentication enabled
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  # ...
The authentication type must match the equivalent configuration for the Kafka
listener used to access the Kafka cluster.
When the User Operator creates the user, it creates a new secret with the same name as the
resource. The secret contains a private and public key for mTLS. The public key is contained in a user certificate, which is signed by a clients CA (certificate authority) when it is created. All keys are in X.509 format.
If you are using the clients CA generated by the Cluster Operator, the user certificates generated by the User Operator are also renewed when the clients CA is renewed by the Cluster Operator.
The user secret provides keys and certificates in PEM and PKCS #12 formats.
Example secret with user credentials
apiVersion: v1
kind: Secret
metadata:
  name: my-user
  labels:
    strimzi.io/kind: KafkaUser
    strimzi.io/cluster: my-cluster
type: Opaque
data:
  ca.crt: <public_key>                  # Public key of the clients CA
  user.crt: <user_certificate>          # Public key of the user
  user.key: <user_private_key>          # Private key of the user
  user.p12: <store>                     # PKCS #12 store for user certificates and keys
  user.password: <password_for_store>   # Protects the PKCS #12 store
When you configure a client, you specify the following:
- Truststore properties for the public cluster CA certificate to verify the identity of the Kafka cluster
- Keystore properties for the user authentication credentials to verify the client
The configuration depends on the file format (PEM or PKCS #12). This example uses PKCS #12 stores, and the passwords required to access the credentials in the stores.
Example client configuration using mTLS in PKCS #12 format
bootstrap.servers=<kafka_cluster_name>-kafka-bootstrap:9093 1
security.protocol=SSL 2
ssl.truststore.location=/tmp/ca.p12 3
ssl.truststore.password=<truststore_password> 4
ssl.keystore.location=/tmp/user.p12 5
ssl.keystore.password=<keystore_password> 6
- 1
- The bootstrap server address to connect to the Kafka cluster.
- 2
- The security protocol option when using TLS for encryption.
- 3
- The truststore location contains the public key certificate (
ca.p12
) for the Kafka cluster. A cluster CA certificate and password is generated by the Cluster Operator in the<cluster_name>-cluster-ca-cert
secret when the Kafka cluster is created. - 4
- The password (
ca.password
) for accessing the truststore. - 5
- The keystore location contains the public key certificate (
user.p12
) for the Kafka user. - 6
- The password (
user.password
) for accessing the keystore.
14.2.2.2. mTLS authentication using a certificate issued outside the User Operator
To use mTLS authentication with a certificate issued outside the User Operator, set the type field in the KafkaUser resource to tls-external. A secret and authentication credentials are not created for the user.
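Following the pattern of the previous example, a user configured for mTLS with externally issued certificates might look like this:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls-external
  # ...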