Chapter 1. Features

AMQ Streams 2.2 and subsequent patch releases introduce the features described in this section.

AMQ Streams 2.2 on OpenShift is based on Kafka 3.2.3 and Strimzi 0.29.x.

Note

To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project.

1.1. AMQ Streams 2.2.x (Long Term Support)

AMQ Streams 2.2.x is the Long Term Support (LTS) offering for AMQ Streams.

The latest patch release is AMQ Streams 2.2.1. The AMQ Streams product images have changed to version 2.2.1. The supported Kafka version remains at 3.2.3.

For information on the LTS terms and dates, see the AMQ Streams LTS Support Policy.

1.2. OpenShift Container Platform support

AMQ Streams 2.2 is supported on OpenShift Container Platform 4.8 to 4.11.

For more information about the supported platform versions, see the AMQ Streams Supported Configurations.

1.3. Kafka 3.2.3 support

AMQ Streams now supports Apache Kafka version 3.2.3.

AMQ Streams uses Kafka 3.2.3. Only Kafka distributions built by Red Hat are supported.

You must upgrade the Cluster Operator to AMQ Streams version 2.2 before you can upgrade brokers and client applications to Kafka 3.2.3. For upgrade instructions, see Upgrading AMQ Streams.

Refer to the Kafka 3.1.0, Kafka 3.2.0, Kafka 3.2.1, and Kafka 3.2.3 Release Notes for additional information.

Note

Kafka 3.1.x is supported only for the purpose of upgrading to AMQ Streams 2.2.

For more information on supported versions, see the AMQ Streams Component Details.

Kafka 3.2.3 uses ZooKeeper version 3.6.3, which is the same version used by Kafka 3.1.0.

1.4. Supporting the v1beta2 API version

The v1beta2 API version for all custom resources was introduced with AMQ Streams 1.7. For AMQ Streams 1.8, v1alpha1 and v1beta1 API versions were removed from all AMQ Streams custom resources apart from KafkaTopic and KafkaUser.

Upgrade of the custom resources to v1beta2 prepares AMQ Streams for a move to Kubernetes CRD v1, which is required for Kubernetes v1.22.

If you are upgrading from an AMQ Streams version prior to version 1.7:

  1. Upgrade to AMQ Streams 1.7
  2. Convert the custom resources to v1beta2
  3. Upgrade to AMQ Streams 1.8
Important

You must upgrade your custom resources to use API version v1beta2 before upgrading to AMQ Streams version 2.2.

See Deploying and upgrading AMQ Streams.

1.4.1. Upgrading custom resources to v1beta2

To support the upgrade of custom resources to v1beta2, AMQ Streams provides an API conversion tool, which you can download from the AMQ Streams software downloads page.

You perform the custom resource upgrades in two steps.

Step one: Convert the format of custom resources

Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways:

  • Converting the YAML files that describe the configuration for AMQ Streams custom resources
  • Converting AMQ Streams custom resources directly in the cluster

Alternatively, you can manually convert each custom resource into a format applicable to v1beta2. Instructions for manually converting custom resources are included in the documentation.

Step two: Upgrade CRDs to v1beta2

Next, using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually.
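As an illustration, the two steps might look like the following, assuming the tool has been downloaded and extracted. The file name, resource kind, and namespace are placeholders; see the upgrade documentation for the full command syntax.

```shell
# Step one: convert the format of custom resources, either by converting
# the YAML files that describe them...
bin/api-conversion.sh convert-file --file kafka.yaml --in-place

# ...or by converting the custom resources directly in the cluster
bin/api-conversion.sh convert-resource --kind Kafka --namespace my-namespace

# Step two: set v1beta2 as the storage API version in the CRDs
bin/api-conversion.sh crd-upgrade
```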

For full instructions, see Upgrading AMQ Streams.

1.5. Support for IBM Z and LinuxONE architecture

AMQ Streams 2.2 is enabled to run on IBM Z and LinuxONE s390x architecture.

Support for IBM Z and LinuxONE applies to AMQ Streams running with Kafka on OpenShift Container Platform 4.10 and later.

1.5.1. Requirements for IBM Z and LinuxONE

  • OpenShift Container Platform 4.10 and later

1.5.2. Unsupported on IBM Z and LinuxONE

  • AMQ Streams on disconnected OpenShift Container Platform environments
  • AMQ Streams OPA integration

1.6. Support for IBM Power architecture

AMQ Streams 2.2 is enabled to run on IBM Power ppc64le architecture.

Support for IBM Power applies to AMQ Streams running with Kafka on OpenShift Container Platform 4.9 and later.

1.6.1. Requirements for IBM Power

  • OpenShift Container Platform 4.9 and later

1.6.2. Unsupported on IBM Power

  • AMQ Streams on disconnected OpenShift Container Platform environments

1.7. UseStrimziPodSets feature gate (technology preview)

The UseStrimziPodSets feature gate controls a resource for managing pods called StrimziPodSet. When the feature gate is enabled, this resource is used instead of StatefulSets, and AMQ Streams handles the creation and management of pods instead of OpenShift. Using StrimziPodSets instead of StatefulSets gives AMQ Streams more control over pod management.

The feature gate is at an alpha level of maturity, so it should be treated as a technology preview.

The preview provides an opportunity to test the StrimziPodSet resource. The feature will be enabled by default in release 2.3.

To enable the feature gate, specify +UseStrimziPodSets as a value for the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.

Enabling the UseStrimziPodSets feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: +UseStrimziPodSets

See UseStrimziPodSets feature gate and Feature gate releases.

1.8. UseKRaft feature gate (development preview)

As a Kafka cluster administrator, you can toggle a subset of features on and off using feature gates in the Cluster Operator deployment configuration.

Apache Kafka is in the process of phasing out the need for ZooKeeper. With the new UseKRaft feature gate enabled, you can try deploying a Kafka cluster in KRaft (Kafka Raft metadata) mode without ZooKeeper.

This feature gate is at an alpha level of maturity, and should be treated as a development preview.

Caution

This feature gate is experimental, intended only for development and testing, and must not be enabled for a production environment.

To enable the UseKRaft feature gate, specify +UseKRaft and +UseStrimziPodSets as values for the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration. The UseKRaft feature gate depends on the UseStrimziPodSets feature gate.

Enabling the UseKRaft feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: +UseKRaft,+UseStrimziPodSets

Currently, the KRaft mode in AMQ Streams has the following major limitations:

  • Moving from Kafka clusters with ZooKeeper to KRaft clusters or the other way around is not supported.
  • Upgrades and downgrades of Apache Kafka versions or the AMQ Streams operator are not supported. Users might need to delete the cluster, upgrade the operator, and deploy a new Kafka cluster.
  • The Entity Operator (including the User Operator and Topic Operator) is not supported. The spec.entityOperator property must be removed from the Kafka custom resource.
  • Simple authorization (type: simple) is not supported.
  • SCRAM-SHA-512 authentication is not supported.
  • JBOD storage is not supported. The type: jbod storage can be used, but the JBOD array can contain only one disk.
  • Liveness and readiness probes are disabled.
  • All Kafka nodes have both the controller and broker KRaft roles. Kafka clusters with separate controller and broker nodes are not supported.

See UseKRaft feature gate and Feature gate releases.

1.9. General Availability for Cruise Control

Cruise Control moves from Technology Preview to General Availability (GA). You can deploy Cruise Control and use it to rebalance your Kafka cluster using optimization goals, which are defined constraints on CPU, disk, network load, and more. In a balanced Kafka cluster, the workload is more evenly distributed across the broker pods.

Cruise Control is configured and deployed as part of a Kafka resource. You can use the default optimization goals or modify them to suit your requirements. Example YAML configuration files for Cruise Control are provided in examples/cruise-control/.

When Cruise Control is deployed, you can create KafkaRebalance custom resources to:

  • Generate optimization proposals from multiple optimization goals
  • Rebalance a Kafka cluster based on an optimization proposal
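As an illustration, adding an empty cruiseControl section to the Kafka resource deploys Cruise Control with the default optimization goals, and a rebalance can then be requested through a KafkaRebalance resource. The cluster name and the goals listed here are examples:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster  # must match the name of the Kafka resource
spec:
  goals:  # optional; the default optimization goals are used if omitted
    - CpuCapacityGoal
    - DiskCapacityGoal
    - NetworkInboundCapacityGoal
```

Applying this resource causes Cruise Control to generate an optimization proposal, which can then be reviewed and approved.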

Other Cruise Control features are not currently supported, including anomaly detection, notifications, write-your-own goals, and changing the topic replication factor.

See Cruise Control for cluster rebalancing.

1.10. Cruise Control scaling and rebalancing modes

You can now generate optimization proposals for rebalancing operations in one of the following modes:

  • full
  • add-brokers
  • remove-brokers

Previously, proposals were generated only in full mode, where replicas might move across all brokers in a cluster. Now you can use the add-brokers and remove-brokers modes to account for scaling up and scaling down operations.

Use the add-brokers mode after scaling up. You specify new brokers and the rebalancing operation moves replicas from existing brokers to the newly added brokers. This is a faster option than rebalancing the whole cluster.

Use the remove-brokers mode before scaling down. You specify the brokers you are going to remove, and the rebalancing operation moves any replicas off those brokers.
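As a sketch, the mode and the affected brokers are set in the KafkaRebalance resource. The resource name, cluster name, and broker IDs below are examples:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: add-brokers   # or remove-brokers; full is the default
  brokers: [3, 4]     # IDs of the newly added (or soon-to-be-removed) brokers
```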

See Rebalancing modes, Generating optimization proposals, and Approving optimization proposals.

1.11. Debezium for change data capture integration

Red Hat Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
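As an illustration, with a Kafka Connect cluster whose image includes the Debezium MySQL connector plugin, the connector might be configured through a KafkaConnector resource along these lines. The connector class name is Debezium's; the cluster name, connection details, and other values are placeholders, and the configuration is abbreviated:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-debezium-connector
  labels:
    strimzi.io/cluster: my-connect-cluster  # name of the KafkaConnect resource
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:  # abbreviated; see the Debezium documentation for required properties
    database.hostname: mysql-host
    database.port: 3306
    database.user: debezium
    database.password: changeme
    database.server.name: my-server  # prefix for the change event topics
```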

Debezium has multiple uses, including:

  • Data replication
  • Updating caches and search indexes
  • Simplifying monolithic applications
  • Data integration
  • Enabling streaming queries

Debezium provides connectors (based on Kafka Connect) for the following common databases:

  • Db2
  • MongoDB
  • MySQL
  • PostgreSQL
  • SQL Server

For more information on deploying Debezium with AMQ Streams, refer to the product documentation.

1.12. Service Registry

You can use Service Registry as a centralized store of service schemas for data streaming. For Kafka, you can use Service Registry to store Apache Avro or JSON schemas.

Service Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints.

Using Service Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code.

For example, the schemas used to serialize and deserialize messages can be stored in the registry and then referenced from the applications that use them, ensuring that the messages they send and receive are compatible with those schemas.

Kafka client applications can push or pull their schemas from Service Registry at runtime.
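For example, a Kafka producer might point its serializer at the registry through client configuration properties. The property and class names below follow the Apicurio Registry client libraries on which Service Registry is based, and the registry URL is a placeholder:

```properties
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=io.apicurio.registry.serde.avro.AvroKafkaSerializer
apicurio.registry.url=http://my-registry:8080/apis/registry/v2
```

The serializer registers or looks up the Avro schema in the registry at runtime, so the schema does not have to be bundled with the application.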

For more information on using Service Registry with AMQ Streams, refer to the Service Registry documentation.