Chapter 1. Features

AMQ Streams version 1.5 is based on Strimzi 0.18.x.

The features added in this release, which were not available in previous releases of AMQ Streams, are outlined below.

1.1. Kafka 2.5.0 support

AMQ Streams now supports and uses Apache Kafka version 2.5.0. Only Kafka distributions built by Red Hat are supported.

You must upgrade the Cluster Operator to AMQ Streams version 1.5 before you can upgrade brokers and client applications to Kafka 2.5.0. For upgrade instructions, see AMQ Streams and Kafka upgrades.
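
As a minimal sketch of the broker upgrade step, the Kafka version is set with the version property of the Kafka resource. The cluster name, ephemeral storage, and the retained message format version below are illustrative only:

  apiVersion: kafka.strimzi.io/v1beta1
  kind: Kafka
  metadata:
    name: my-cluster                  # placeholder name
  spec:
    kafka:
      version: 2.5.0                  # new Kafka version for the brokers
      replicas: 3
      listeners:
        plain: {}
        tls: {}
      config:
        # Keep the message format at the previous version until all
        # client applications are upgraded, then raise it to "2.5"
        log.message.format.version: "2.4"
      storage:
        type: ephemeral               # illustrative; production clusters typically use persistent storage
    zookeeper:
      replicas: 3
      storage:
        type: ephemeral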

Refer to the Kafka 2.4.0 and Kafka 2.5.0 Release Notes for additional information.

Note

Kafka 2.4.x is supported in AMQ Streams 1.5 only for upgrade purposes.

For more information on supported versions, see the Red Hat AMQ 7 Component Details Page on the Customer Portal.

1.2. ZooKeeper 3.5.8 support

Kafka version 2.5.0 requires ZooKeeper version 3.5.8.

You do not need to manually upgrade to ZooKeeper 3.5.8; the Cluster Operator performs the ZooKeeper upgrade when it upgrades Kafka brokers. However, you might notice some additional rolling updates during this procedure.

1.3. OpenShift 4.x disconnected installation

Support for disconnected installation on OpenShift 4.x moves out of Technology Preview and is now a generally available component of AMQ Streams.

You can perform a disconnected installation of AMQ Streams when your OpenShift cluster is being used as a disconnected cluster on a restricted network.

For a disconnected installation, you obtain the required images and push them to your local container registry. If you are using the Operator Lifecycle Manager (OLM), this means disabling the default sources used by OperatorHub and creating local mirrors so that you can install AMQ Streams from local sources.
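
As an illustration of the OLM part of this setup, you might disable the default catalog sources and register a mirrored catalog. This is a sketch only; the registry host and catalog image name are placeholders:

  apiVersion: config.openshift.io/v1
  kind: OperatorHub
  metadata:
    name: cluster
  spec:
    disableAllDefaultSources: true    # turn off the default OperatorHub sources
  ---
  apiVersion: operators.coreos.com/v1alpha1
  kind: CatalogSource
  metadata:
    name: mirrored-redhat-operators   # hypothetical mirrored catalog
    namespace: openshift-marketplace
  spec:
    sourceType: grpc
    image: local-registry.example.com/olm/redhat-operators:v1   # image in your local registry
    displayName: Mirrored Red Hat Operators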

See Using Operator Lifecycle Manager on restricted networks.

1.4. MirrorMaker 2.0

Support for MirrorMaker 2.0 moves out of Technology Preview and is now a generally available component of AMQ Streams.

MirrorMaker 2.0 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters.

MirrorMaker 2.0 uses:

  • Source cluster configuration to consume data from the source cluster
  • Target cluster configuration to output data to the target cluster

MirrorMaker 2.0 introduces an entirely new way of replicating data between clusters. If you choose to use MirrorMaker 2.0, there is currently no legacy support, so any existing MirrorMaker resources must be manually converted to the new format, sketched below.
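
As a sketch of the new format, a KafkaMirrorMaker2 resource pairs source and target cluster configurations through named aliases. The cluster names and bootstrap addresses are placeholders:

  apiVersion: kafka.strimzi.io/v1alpha1
  kind: KafkaMirrorMaker2
  metadata:
    name: my-mirror-maker-2
  spec:
    version: 2.5.0
    replicas: 1
    connectCluster: "target-cluster"          # must match one of the aliases below
    clusters:
      - alias: "source-cluster"
        bootstrapServers: source-cluster-kafka-bootstrap:9092
      - alias: "target-cluster"
        bootstrapServers: target-cluster-kafka-bootstrap:9092
    mirrors:
      - sourceCluster: "source-cluster"       # consume data from here
        targetCluster: "target-cluster"       # output data to here
        sourceConnector: {}                   # default connector settings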

See Using AMQ Streams with MirrorMaker 2.0.

1.5. Debezium for change data capture integration

Red Hat Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka.

You can deploy and integrate Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
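
For example, using the Cluster Operator's KafkaConnector resources, a Debezium MySQL connector might be configured along these lines. This is a sketch: the database coordinates are placeholders, and in practice credentials would come from a Secret rather than plain text:

  apiVersion: kafka.strimzi.io/v1alpha1
  kind: KafkaConnector
  metadata:
    name: inventory-connector                 # hypothetical connector name
    labels:
      strimzi.io/cluster: my-connect-cluster  # Kafka Connect cluster to deploy to
  spec:
    class: io.debezium.connector.mysql.MySqlConnector
    tasksMax: 1
    config:
      database.hostname: mysql                # placeholder database service
      database.port: "3306"
      database.user: debezium
      database.password: changeme             # illustration only; use a Secret in practice
      database.server.id: "184054"
      database.server.name: dbserver1         # prefix for the Kafka topic names
      database.whitelist: inventory           # database to capture changes from
      database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
      database.history.kafka.topic: schema-changes.inventory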

Debezium has multiple uses, including:

  • Data replication
  • Updating caches and search indexes
  • Simplifying monolithic applications
  • Data integration
  • Enabling streaming queries

Debezium provides connectors (based on Kafka Connect) for the following common databases:

  • MySQL
  • PostgreSQL
  • SQL Server
  • MongoDB

For more information on deploying Debezium with AMQ Streams, refer to the product documentation.

1.6. Service Registry

You can use Service Registry as a centralized store of service schemas for data streaming. For Kafka, you can use Service Registry to store Apache Avro or JSON schemas.

Service Registry provides a REST API and a Java REST client, so that client applications can register and query schemas through server-side endpoints.

Using Service Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code.

For example, the schemas used to serialize and deserialize messages can be stored in the registry and then referenced from the applications that use them, ensuring that the messages they send and receive are compatible with those schemas.

Kafka client applications can push schemas to or pull them from Service Registry at runtime.
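
As a sketch of this decoupling, a Kafka Connect deployment could reference the registry through converter configuration alone. The converter class and URL below assume the Apicurio-based Avro converter that Service Registry provides, and the service address is a placeholder:

  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaConnect
  metadata:
    name: my-connect-cluster
  spec:
    version: 2.5.0
    replicas: 1
    bootstrapServers: my-cluster-kafka-bootstrap:9092
    config:
      # Look up Avro schemas in Service Registry instead of embedding
      # them in the application; only the URL couples the two
      key.converter: io.apicurio.registry.utils.converter.AvroConverter
      key.converter.apicurio.registry.url: http://service-registry.example:8080/api
      value.converter: io.apicurio.registry.utils.converter.AvroConverter
      value.converter.apicurio.registry.url: http://service-registry.example:8080/api

Producer and consumer applications point their serializers and deserializers at the same registry URL in a similar way.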

See Managing schemas with Service Registry.