Chapter 1. Features
AMQ Streams version 1.7 is based on Strimzi 0.22.x.
Features added in this release that were not available in previous releases of AMQ Streams are outlined below.
To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project.
1.1. OpenShift Container Platform support
AMQ Streams 1.7 is supported on OpenShift Container Platform 4.6 and 4.7.
For more information about the supported platform versions, see the Red Hat Knowledgebase article Red Hat AMQ 7 Supported Configurations.
1.2. Kafka 2.7.0 support
AMQ Streams now supports and uses Apache Kafka version 2.7.0. Only Kafka distributions built by Red Hat are supported.
You must upgrade the Cluster Operator to AMQ Streams version 1.7 before you can upgrade brokers and client applications to Kafka 2.7.0. For upgrade instructions, see Upgrading AMQ Streams.
Refer to the Kafka 2.6.0 and Kafka 2.7.0 Release Notes for additional information.
Kafka 2.6.x is supported only for the purpose of upgrading to AMQ Streams 1.7.
For more information on supported versions, see the Red Hat AMQ 7 Component Details Page on the Customer Portal.
Kafka 2.7.0 requires the same ZooKeeper version as Kafka 2.6.x (ZooKeeper version 3.5.8). Therefore, the Cluster Operator does not perform a ZooKeeper upgrade when you upgrade from AMQ Streams 1.6 to AMQ Streams 1.7.
1.3. Introducing the v1beta2 API version
AMQ Streams 1.7 introduces the v1beta2 API version, which updates the schemas of the AMQ Streams custom resources. Older API versions are now deprecated.
After you have upgraded to AMQ Streams 1.7, you must upgrade your custom resources to use API version v1beta2. You can do this at any time after upgrading, but the upgrades must be completed before the next AMQ Streams minor version update (AMQ Streams 1.8).
The Red Hat AMQ Streams 1.7.0 API Conversion Tool is provided to support the upgrade of custom resources. You can download the API conversion tool from the AMQ Streams download site. Instructions for using the tool are included in the documentation and the provided readme.
Upgrading custom resources to v1beta2 prepares AMQ Streams for Kubernetes CRD v1, which will be required for Kubernetes 1.22.
1.3.1. Upgrading custom resources to v1beta2
You perform the custom resource upgrades in two steps.
Step one: Convert the format of custom resources
Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways:
- Converting the YAML files that describe the configuration for AMQ Streams custom resources
- Converting AMQ Streams custom resources directly in the cluster
Alternatively, you can manually convert each custom resource into a format applicable to v1beta2. Instructions for manually converting custom resources are included in the documentation.
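As a sketch, running the API conversion tool for the first step might look like the following; the script name and options here follow the tool's readme, and the file, resource, and namespace names are placeholders:

```shell
# Convert a YAML file describing a custom resource to the v1beta2 format
# (input and output file names are placeholders)
bin/api-conversion.sh convert-file --file kafka.yaml --output kafka-v1beta2.yaml

# Or convert a named custom resource directly in the cluster
bin/api-conversion.sh convert-resource --kind Kafka --name my-cluster --namespace my-namespace
```

Check the readme bundled with the tool for the full set of supported options.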
Step two: Upgrade CRDs to v1beta2
Next, using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually.
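As a sketch, the second step is a single command run against the cluster (script name as in the tool's readme):

```shell
# Set v1beta2 as the storage API version in the AMQ Streams CRDs
bin/api-conversion.sh crd-upgrade
```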
For full instructions, see AMQ Streams custom resource upgrades.
1.3.2. kubectl apply commands with v1beta2
After upgrading custom resources to v1beta2, the kubectl apply command no longer works when performing some tasks. You need to use alternative commands in order to accommodate the larger file sizes of v1beta2 custom resources.
Use kubectl create -f instead of kubectl apply -f when deploying the AMQ Streams Operators.
Use kubectl replace -f instead of kubectl apply -f when:
- Upgrading the Cluster Operator
- Downgrading the Cluster Operator to a previous version
If the custom resource you are creating already exists (for example, if it is already installed through a different namespace), use kubectl replace. If the custom resource does not exist, use kubectl create.
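The guidance above can be sketched as follows; the installation file path and namespace are placeholders:

```shell
# First-time deployment of the AMQ Streams Operators: use create, not apply
kubectl create -f install/cluster-operator -n my-namespace

# Upgrading (or downgrading) an existing Cluster Operator deployment: use replace
kubectl replace -f install/cluster-operator -n my-namespace
```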
1.4. Multi-version product upgrades
You can now upgrade from a previous AMQ Streams version directly to the latest AMQ Streams version within a single upgrade. For example, you can upgrade from AMQ Streams 1.5 directly to AMQ Streams 1.7, skipping intermediate versions.
1.5. Kafka Connect build configuration
You can now use build configuration so that AMQ Streams automatically builds a container image with the connector plugins you require for your data connections.
For AMQ Streams to create the new image automatically, the build configuration requires output properties to reference a container registry that stores the container image, and plugins properties to list the connector plugins and their artifacts to add to the image.
A build configuration can also reference an imagestream. Imagestreams reference a container image stored in OpenShift Container Platform’s integrated registry.
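For example, an output that references an imagestream might look like the following fragment; the imagestream name and tag are illustrative:

```yaml
# Fragment of a KafkaConnect build configuration (names are illustrative)
build:
  output:
    type: imagestream
    image: my-connect-image:latest  # imagestream name and tag in the current namespace
```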
The output properties describe the type and name of the image, and optionally the name of the Secret containing the credentials needed to access the container registry. The plugins properties describe the type of artifact and the URL from which the artifact is downloaded. Additionally, you can specify a SHA-512 checksum to verify the artifact before unpacking it.
Example Kafka Connect configuration to create a new image automatically
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  build:
    output:
      type: docker
      image: my-registry.io/my-org/my-connect-cluster:latest
      pushSecret: my-registry-credentials
    plugins:
      - name: debezium-postgres-connector
        artifacts:
          - type: tgz
            url: https://ARTIFACT-ADDRESS.tgz
            sha512sum: HASH-NUMBER-TO-VERIFY-ARTIFACT
  # ...
Support for Kafka Connect with Source-to-Image (S2I) is deprecated, as described in Chapter 4, Deprecated features.
1.6. Metrics configuration
Metrics are now configured using a ConfigMap that is automatically created when a custom resource is deployed.
Use the metricsConfig property to enable and configure Prometheus metrics for Kafka components. The metricsConfig property contains a reference to a ConfigMap containing additional configuration for the Prometheus JMX exporter.
Example metrics configuration for Kafka
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: my-config-map
          key: my-key
    # ...
  zookeeper:
    # ...
The ConfigMap stores the YAML configuration for the JMX Prometheus exporter under a key.
Example ConfigMap with metrics configuration for Kafka
kind: ConfigMap
apiVersion: v1
metadata:
  name: my-configmap
data:
  my-key: |
    lowercaseOutputName: true
    rules:
      # Special cases and very specific rules
      - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
        name: kafka_server_$1_$2
        type: GAUGE
        labels:
          clientId: "$3"
          topic: "$4"
          partition: "$5"
      # further configuration
To enable Prometheus metrics export without further configuration, you can reference a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key. When referencing an empty file, all metrics are exposed as long as they have not been renamed.
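For example, a ConfigMap with an empty configuration file might look like the following; the ConfigMap and key names are illustrative:

```yaml
# Hypothetical ConfigMap referencing an empty metrics configuration file
kind: ConfigMap
apiVersion: v1
metadata:
  name: my-empty-metrics-configmap
data:
  my-key: ""   # empty file: all metrics are exposed with their default names
```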
The spec.metrics property is deprecated, as described in Chapter 4, Deprecated features.
1.7. Debezium for change data capture integration
Red Hat Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
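As an illustration, a Debezium connector deployed through Kafka Connect might be configured with a KafkaConnector resource similar to the following; the connection settings and resource names are hypothetical examples, not a verified configuration:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-debezium-connector
  labels:
    strimzi.io/cluster: my-connect-cluster   # name of the Kafka Connect cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    database.hostname: my-postgres   # hypothetical connection settings
    database.port: 5432
    database.user: debezium
    database.dbname: mydb
```

Refer to the Debezium product documentation for the required configuration properties of each connector.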
Debezium has multiple uses, including:
- Data replication
- Updating caches and search indexes
- Simplifying monolithic applications
- Data integration
- Enabling streaming queries
Debezium provides connectors (based on Kafka Connect) for the following common databases:
- Db2
- MongoDB
- MySQL
- PostgreSQL
- SQL Server
For more information on deploying Debezium with AMQ Streams, refer to the product documentation.
1.8. Service Registry
You can use Service Registry as a centralized store of service schemas for data streaming. For Kafka, you can use Service Registry to store Apache Avro or JSON schema.
Service Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints.
Using Service Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code.
For example, the schemas used to serialize and deserialize messages can be stored in the registry and then referenced from the applications that use them, ensuring that the messages they send and receive are compatible with those schemas.
Kafka client applications can push or pull their schemas from Service Registry at runtime.
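As a minimal sketch, a producer might be pointed at the registry through its client configuration. The serializer class and registry property names below are illustrative assumptions that vary by Service Registry version; consult the Service Registry documentation for the exact names:

```java
import java.util.Properties;

public class RegistryClientConfig {
    // Builds a hypothetical Kafka producer configuration that references
    // a Service Registry instance by URL (all addresses are placeholders).
    public static Properties producerConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Assumed serializer class and registry URL property; names differ
        // between Service Registry versions, so check the product documentation.
        props.put("value.serializer", "io.apicurio.registry.serde.avro.AvroKafkaSerializer");
        props.put("apicurio.registry.url", "http://my-registry:8080/apis/registry/v2");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerConfig().getProperty("apicurio.registry.url"));
    }
}
```

With this configuration, the serializer resolves the message schema from the registry at runtime rather than bundling it with the application.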
For more information on using Service Registry with AMQ Streams, refer to the product documentation.