Chapter 1. Features
AMQ Streams version 1.8 is based on Strimzi 0.24.x.
The features added in this release, which were not available in previous releases of AMQ Streams, are outlined below.
To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project.
1.1. OpenShift Container Platform support
AMQ Streams 1.8 is supported on OpenShift Container Platform 4.6 and 4.8.
For more information about the supported platform versions, see the Red Hat Knowledgebase article Red Hat AMQ 7 Supported Configurations.
1.2. Kafka 2.8.0 support
AMQ Streams now supports and uses Apache Kafka version 2.8.0. Only Kafka distributions built by Red Hat are supported.
You must upgrade the Cluster Operator to AMQ Streams version 1.8 before you can upgrade brokers and client applications to Kafka 2.8.0. For upgrade instructions, see Upgrading AMQ Streams.
Kafka 2.7.x is supported only for the purpose of upgrading to AMQ Streams 1.8.
For more information on supported versions, see the Red Hat AMQ 7 Component Details Page on the Customer Portal.
Kafka version 2.8.0 requires ZooKeeper version 3.5.9. Therefore, the Cluster Operator performs a ZooKeeper upgrade when you upgrade from AMQ Streams 1.7 to AMQ Streams 1.8.
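As a sketch of what the upgrade involves, the Kafka version is rolled forward in the `Kafka` custom resource while the inter-broker protocol version is held back until all brokers are upgraded. The cluster name `my-cluster` and the staged handling of `inter.broker.protocol.version` shown here are illustrative assumptions; follow Upgrading AMQ Streams for the full procedure:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster          # assumed cluster name
spec:
  kafka:
    version: 2.8.0          # new Kafka version
    config:
      # Kept at the old version until all brokers are running 2.8.0,
      # then rolled forward to "2.8" in a second update
      inter.broker.protocol.version: "2.7"
  # ...
```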
1.3. Feature gates to toggle features on and off
As a Kafka cluster administrator, you can now toggle a subset of features on and off using feature gates in the operator’s deployment configuration. Feature gates are currently available only for the Cluster Operator; future releases might add feature gates to other operators.
AMQ Streams 1.8 introduces the ControlPlaneListener and ServiceAccountPatching feature gates and their associated new features, which are described in the sections that follow.
Feature gates have a default state of enabled or disabled. When enabled, a feature gate changes the behavior of the operator and enables the feature in your AMQ Streams deployment.
Feature gates have a maturity level of Alpha, Beta, or Generally Available (GA).
Table 1.1. Maturity levels of feature gates

| Feature gate maturity level | Description | Default state |
| --- | --- | --- |
| Alpha | Controls features that might be experimental, unstable, or not sufficiently tested for production use. These features are subject to change in future releases. | Disabled |
| Beta | Controls features that are well tested. These features are not likely to change in future releases. | Enabled |
| General Availability (GA) | Controls features that are stable, well tested, and suitable for production use. GA features will not change in future releases. | Enabled |
Configuring feature gates

In the Cluster Operator’s deployment configuration, in the STRIMZI_FEATURE_GATES environment variable, specify a comma-separated list of feature gate names and prefixes. A + prefix enables the feature gate and a - prefix disables it.
Example: Enabling the Control Plane Listener feature gate

Edit the Deployment for the Cluster Operator:

oc edit deployment strimzi-cluster-operator

Add the STRIMZI_FEATURE_GATES environment variable with a value of +ControlPlaneListener:

# ...
env:
  # ...
  - name: STRIMZI_FEATURE_GATES
    value: +ControlPlaneListener
# ...
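Multiple feature gates can be set in the same variable, separated by commas. A sketch enabling both of the feature gates introduced in this release:

```yaml
# ...
env:
  # ...
  - name: STRIMZI_FEATURE_GATES
    value: +ControlPlaneListener,+ServiceAccountPatching
# ...
```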
1.4. Control plane listeners
This feature is controlled using the ControlPlaneListener feature gate, which is in alpha stage and disabled by default. For more information, see Feature gates.
In a standard AMQ Streams cluster, control plane traffic and data plane traffic both use the same inter-broker listener on port 9091.
With this release, you can configure your cluster so that control plane traffic uses a dedicated control plane listener on port 9090. Data plane traffic continues to use the listener on port 9091.
Using control plane listeners might improve performance because important controller connections, such as partition leadership changes, are not delayed by data replication across brokers. The majority of data plane traffic consists of this data replication.
1.5. Service account patching
This feature is controlled using the ServiceAccountPatching feature gate, which is in alpha stage and disabled by default. For more information, see Feature gates.
By default, the Cluster Operator does not update service accounts.
With this release, you can enable updates to service accounts to be applied in every reconciliation. For example, the Cluster Operator can apply custom labels or annotations to the service account. Custom labels and annotations are configured for custom resources using the template property.
Example custom labels and annotations
# ...
template:
  serviceAccount:
    metadata:
      labels:
        label1: value1
        label2: value2
      annotations:
        annotation1: value1
        annotation2: value2
# ...
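In context, the template sits under the component it applies to in the custom resource. A minimal sketch for the Kafka brokers' service account, assuming a cluster named `my-cluster`:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster        # assumed cluster name
spec:
  kafka:
    # ...
    template:
      serviceAccount:
        metadata:
          labels:
            label1: value1
          annotations:
            annotation1: value1
  # ...
```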
1.6. Debezium for change data capture integration
Red Hat Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka.

You can deploy and integrate Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
Debezium has multiple uses, including:
- Data replication
- Updating caches and search indexes
- Simplifying monolithic applications
- Data integration
- Enabling streaming queries
Debezium provides connectors (based on Kafka Connect) for the following common databases:
- Db2
- MongoDB
- MySQL
- PostgreSQL
- SQL Server
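Deploying a Debezium connector through Kafka Connect typically means creating a connector configuration against a Kafka Connect cluster. The following is a hypothetical sketch using a KafkaConnector custom resource with the Debezium MySQL connector; the resource names, database coordinates, and the assumption that the connector plugin is already present in the Kafka Connect image are all illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector         # hypothetical connector name
  labels:
    strimzi.io/cluster: my-connect  # hypothetical Kafka Connect cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: mysql        # hypothetical database service
    database.port: 3306
    database.user: debezium         # hypothetical credentials
    database.password: dbz
    database.server.name: inventory # logical name used as topic prefix
```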
For more information on deploying Debezium with AMQ Streams, refer to the product documentation.
1.7. Service Registry
You can use Service Registry as a centralized store of service schemas for data streaming. For Kafka, you can use Service Registry to store Apache Avro or JSON schemas.
Service Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints.
Using Service Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code.
For example, schemas used to serialize and deserialize messages can be stored in the registry and then referenced from the applications that use them, ensuring that the messages they send and receive are compatible with those schemas.
Kafka client applications can push or pull their schemas from Service Registry at runtime.
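As a sketch of runtime use, a Kafka producer can be configured with a Service Registry-aware serializer that registers or looks up schemas automatically. The serializer class and registry URL property below follow the Apicurio-based Service Registry client and are assumptions; check the Service Registry documentation for the exact names in your version:

```properties
# Hypothetical producer configuration using Service Registry Avro serializers
bootstrap.servers=my-cluster-kafka-bootstrap:9092
key.serializer=io.apicurio.registry.utils.serde.AvroKafkaSerializer
value.serializer=io.apicurio.registry.utils.serde.AvroKafkaSerializer
# URL of the Service Registry REST API (assumed endpoint)
apicurio.registry.url=http://service-registry:8080/api
```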
For more information on using Service Registry with AMQ Streams, refer to the Service Registry documentation.