Release Notes for Red Hat Integration 2022.q3
What's new in Red Hat Integration
Abstract
Chapter 1. Red Hat Integration
Red Hat Integration is a comprehensive set of integration and event processing technologies for creating, extending, and deploying container-based integration services across hybrid and multicloud environments. Red Hat Integration provides an agile, distributed, and API-centric solution that organizations can use to connect and share data between applications and systems required in a digital world.
Red Hat Integration includes the following capabilities:
- Real-time messaging
- Cross-datacenter message streaming
- API connectivity
- Application connectors
- Enterprise integration patterns
- API management
- Data transformation
- Service composition and orchestration
Additional resources
Chapter 2. Camel Extensions for Quarkus release notes
2.1. Camel Extensions for Quarkus features
- Fast startup and low RSS memory
- Using the optimized build-time and ahead-of-time (AOT) compilation features of Quarkus, your Camel application can be pre-configured at build time resulting in fast startup times.
- Application generator
- Use the Quarkus application generator to bootstrap your application and discover its extension ecosystem.
- Highly configurable
All of the important aspects of a Camel Extensions for Quarkus application can be set up programmatically with CDI (Contexts and Dependency Injection) or via configuration properties. By default, a CamelContext is configured and automatically started for you.
Check out the Configuring your Quarkus applications guide for more information on the different ways to bootstrap and configure an application.
- Integrates with existing Quarkus extensions
- Camel Extensions for Quarkus provides extensions for libraries and frameworks that are used by some Camel components, which inherit native support and configuration options from Quarkus.
2.2. Supported platforms, configurations, databases, and extensions
- For information about supported platforms, configurations, and databases in Camel Extensions for Quarkus version 2.7.1, see the Supported Configuration page on the Customer Portal (login required).
- For a list of Red Hat Camel Extensions for Quarkus extensions and the Red Hat support level for each extension, see the Extensions Overview chapter of the Camel Extensions for Quarkus Reference (login required).
2.3. Technology preview extensions
Red Hat does not provide support for Technology Preview components provided with this release of Camel Extensions for Quarkus. Items designated as Technology Preview in the Extensions Overview chapter of the Camel Extensions for Quarkus Reference have limited supportability, as defined by the Technology Preview Features Support Scope.
2.4. Important notes
- Native mode support for extensions
- Camel Extensions for Quarkus version 2.7.1 introduces native mode support for many extensions. See the Extensions Overview chapter of the Camel Extensions for Quarkus Reference for a full list of extensions and their support levels.
- Camel upgraded from version 3.11.5 to version 3.14.2
Camel Extensions for Quarkus version 2.7.1 has been upgraded from Camel version 3.11.5 to Camel version 3.14.2. For additional information about each intervening Camel patch release, please see the following:
- Camel Quarkus upgraded from version 2.2 to version 2.7
Camel Extensions for Quarkus version 2.7.1 has been upgraded from Camel Quarkus version 2.2 to Camel Quarkus version 2.7. For additional information about each intervening Camel Quarkus patch release, please see the following:
- Elasticsearch Rest extension has been removed from Camel Extensions for Quarkus
The camel-quarkus-elasticsearch-rest extension was deprecated in Camel Extensions for Quarkus version 2.2.1 and has been removed in this release.
Note: The camel-quarkus-elasticsearch-rest extension is still available using the code.quarkus.redhat.com online project generator tool, but only with community support.
2.5. Resolved issues
The following table lists issues that affected Camel Extensions for Quarkus and have been fixed in Camel Extensions for Quarkus version 2.7.1.
Table 2.1. Camel Extensions for Quarkus 2.7.1 Resolved issues
| Issue | Description |
|---|---|
| AWS2 SQS | When sending messages to a queue that has a delay, the delay is not respected. |
| Missing productized | |
For more details of other issues resolved between Camel Quarkus 2.2 and Camel Quarkus 2.7, see the Release Notes for each patch release.
2.6. Additional resources
2.7. Migration from Camel Extensions for Quarkus 2.2 to 2.7
Camel Extensions for Quarkus 2.7 includes new features with breaking changes from the previous Camel Extensions for Quarkus 2.2 release. This section describes the major changes between Camel Extensions for Quarkus 2.2 and version 2.7.
Because of the breaking changes in 2.7, there is no automatic upgrade and a migration process is required. You must also review your existing Camel Extensions for Quarkus applications and update their configuration to meet the new requirements.
When migrating to version 2.7, you must take the following major changes into account:
2.7.1. Graceful shutdown strategy is used by default
In previous releases, the graceful shutdown strategy (the default strategy in Camel) was not used by default; shutdown was not controlled by any strategy.
This is no longer the case. The graceful shutdown strategy is now enabled by default, with one exception: if no graceful shutdown timeout is set and the application runs in development mode, no shutdown strategy is used (the behavior for development mode without a timeout is unchanged).
The DefaultShutdownStrategy can be configured via application.properties. For example:
# set graceful shutdown timeout to 15 seconds
camel.main.shutdownTimeout = 15
2.7.2. Deprecated vertx-kafka extension has been removed
The deprecated vertx-kafka extension has been removed. Any routes using this component must be modified to use the kafka extension.
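As a hedged migration sketch (the endpoint option names shown are illustrative; check the kafka component documentation for the exact equivalents of your vertx-kafka options), the change typically amounts to switching the component scheme in the route:

```
// Before: route using the removed vertx-kafka component (options illustrative)
from("vertx-kafka:my-topic?bootstrapServers=localhost:9092").to("log:incoming");

// After: equivalent route using the supported kafka component
from("kafka:my-topic?brokers=localhost:9092").to("log:incoming");
```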
2.7.3. Removal of configuration property quarkus.camel.main.enabled
Camel Main has been enabled as the default bootstrap mode since Camel Quarkus 1.8.0. The configuration property quarkus.camel.main.enabled has now been removed as there are no major benefits to disabling Camel Main.
2.7.4. Removal of @BuildTimeAvroDataFormat
The deprecated @BuildTimeAvroDataFormat annotation has been removed. Users must instead parse Avro schemas at build time, as described in the Camel Quarkus Avro extension documentation.
Chapter 3. Debezium release notes
Debezium is a distributed change data capture platform that captures row-level changes that occur in database tables and then passes corresponding change event records to Apache Kafka topics. Applications can read these change event streams and access the change events in the order in which they occurred. Debezium is built on Apache Kafka and is deployed and integrated with AMQ Streams on OpenShift Container Platform or on Red Hat Enterprise Linux.
This 2022.Q3 release of Debezium is based on the Debezium community 1.9.5.Final release.
The following topics provide release details:
3.1. Debezium database connectors
Debezium provides connectors based on Kafka Connect for the following common databases:
- Db2
- MongoDB
- MySQL
- Oracle
- PostgreSQL
- SQL Server
3.1.1. Connector usage notes
Db2
- The Debezium Db2 connector does not include the Db2 JDBC driver (jcc-11.5.0.0.jar). See the deployment instructions for information about how to deploy the necessary JDBC driver.
- The Db2 connector requires the use of the abstract syntax notation (ASN) libraries, which are available as a standard part of Db2 for Linux.
- To use the ASN libraries, you must have a license for IBM InfoSphere Data Replication (IIDR). You do not have to install IIDR to use the libraries.
MongoDB
- Currently, you cannot use the transaction metadata feature of the Debezium MongoDB connector with MongoDB 4.2.
Oracle
- The Debezium Oracle connector does not include the Oracle JDBC driver (ojdbc8.jar). See the deployment instructions for information about how to deploy the necessary JDBC driver.
PostgreSQL
- To use the Debezium PostgreSQL connector, you must use the pgoutput logical decoding output plug-in, which is the default for PostgreSQL versions 10 and later.
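As an illustrative reminder (this is server configuration, not part of the connector itself), pgoutput relies on logical decoding being enabled on the PostgreSQL server:

```
# postgresql.conf fragment: enable logical decoding so the built-in
# pgoutput plug-in can be used by the Debezium PostgreSQL connector
wal_level = logical
```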
Additional resources
3.2. Debezium supported configurations
For information about Debezium supported configurations, including information about supported database versions, see the Debezium 1.9.5 Supported configurations page.
3.2.1. AMQ Streams API version
Debezium runs on AMQ Streams 2.1.
AMQ Streams supports the v1beta2 API version, which updates the schemas of the AMQ Streams custom resources. Older API versions are deprecated. After you upgrade to AMQ Streams 1.7, but before you upgrade to AMQ Streams 1.8 or later, you must upgrade your custom resources to use API version v1beta2.
For more information, see the Debezium User Guide.
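As a hedged illustration (the resource name is hypothetical and the spec is omitted), an upgraded AMQ Streams custom resource declares the v1beta2 API version in its header:

```
# illustrative custom resource header after the v1beta2 upgrade
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster   # hypothetical cluster name
```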
3.3. Debezium installation options
You can install Debezium with AMQ Streams on OpenShift or on Red Hat Enterprise Linux:
3.4. New Debezium features
Debezium 1.9.5 includes the following updates.
3.4.1. Features promoted to General Availability in Debezium 1.9.5
The following features are promoted from Technology Preview to General Availability in the 2022.Q3 Debezium release:
- Ad hoc and incremental snapshots
- Provides a mechanism for re-running a snapshot of a table for which you previously captured a snapshot.
- Debezium Oracle connector
The connector for Oracle Database is now fully supported on databases that are configured to use LogMiner. This release of the Debezium Oracle connector includes the following updates:
- DBZ-3317 The documentation that describes the decimal.handling.mode property is now consistent with similar documentation for other Debezium connectors.
- DBZ-4404 The connector now supports using signals to trigger ad hoc snapshots in Oracle versions earlier than 12c.
- DBZ-4436/ DBZ-4883 The deployment documentation now describes how to obtain the Oracle JDBC driver as a Maven artifact.
- DBZ-4494 Documentation describes how to set database permissions for the Oracle connector LogMiner user.
- DBZ-4536 Increases flexibility of the Oracle connector error handler to retry errors that occur during a mining session.
- DBZ-4595 The Oracle connector now supports 'ROWID' data types.
- DBZ-4963 Introduces the log.mining.session.max.ms configuration option for the Oracle connector.
- DBZ-5005 When adjusting the LogMiner batch size, the new size is now based on the current batch size, and not the default size.
- DBZ-5119 Users can configure the connector to emit heartbeat events to ensure that connector offsets remain synchronized in tables where changes do not occur for extended periods.
- DBZ-5225 The LogMiner event SCN is now included in Oracle change event records to prevent situations in which every event that the connector emits during an interval uses the same low-watermark SCN value of the oldest in-progress transaction.
- DBZ-5256 To prevent failures when an improperly deleted archive log cannot be found during connector startup, during the snapshot phase, the Oracle connector no longer defaults to checking the progress of incomplete transactions.
- DBZ-5399 Updates documentation for the signal.data.collection property to specify that in a pluggable database environment the value of the property must be set to the name of the root database.
For a list of Oracle connector features that were introduced during the Technology Preview, see the 2022.Q1 Release notes.
- Outbox event router
- A single message transform (SMT) that supports the outbox pattern for safely and reliably exchanging data between multiple (micro) services.
- Sending signals to a Debezium connector
- The signaling mechanism provides a way to modify the behavior of a connector, or to trigger the connector to perform a one-time action, such as initiating an ad hoc incremental snapshot of a table.
3.4.2. Other recent Debezium feature updates
- Updates in 2022.Q3
The following list contains information about changes included in the 2022.Q3 Debezium release:
- DBZ-3762 By default, the MySQL connector no longer propagates in-line comments in an event DDL to the database history.
- DBZ-4351 Adds metrics to monitor the number of DML create/update/delete events the connector emits since the last start.
- DBZ-4415 Removes support for using the MongoDB connector with MongoDB 5 or greater in oplog mode
- DBZ-4451 Connectors can now correctly recover the history of renamed tables.
- DBZ-4472 Connector logs now record information about an event’s source partition.
- DBZ-4478 Metrics for a connector can now be retrieved from multiple partitions.
- DBZ-4518 You can now configure the Kafka query timeout by setting the database.history.kafka.query.timeout.ms property.
- DBZ-4541 The MySQL and Oracle connectors must now successfully register JMX metrics before they can start.
database.history.kafka.query.timeout.msproperty. - DBZ-4541 The MySQL and Oracle connectors must now successfully register JMX metrics before they can start.
- DBZ-4547 The MySQL connector can now successfully create its history topic in a SaaS environment.
- DBZ-4600 When the MongoDB connector is used with the outbox event router, you can now configure it to decode binary payloads.
- DBZ-4730 When a connector is configured to use decimal string mode, it now expects plain string values instead of scientific exponential notation.
- DBZ-4809 Adds a task ID and partition to the logging context for multi-partition connectors.
- DBZ-4823 The MySQL connector no longer logs a null value for the tableId of excluded tables.
- DBZ-4832 The MySQL connector no longer obtains truststore and keystore parameters from system variables.
- DBZ-4834 Incremental snapshots now correctly include tables that are added to the include list.
- DBZ-4861 The PostgreSQL connector now provides schema information when logging snapshot events.
- DBZ-4948 The PostgreSQL connector now retries connections that close as a result of a network exception.
3.4.3. Debezium 2022.Q1 updates
For a list of features that were included in the previous Debezium release, see the 2022.Q1 Release notes.
3.5. Technology Preview features
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend implementing any Technology Preview features in production environments. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about support scope, see Technology Preview Features Support Scope.
Debezium includes the following Technology Preview features:
- Ad hoc and incremental snapshots for MongoDB connector
- Provides a mechanism for re-running a snapshot of a table for which you previously captured a snapshot.
- CloudEvents converter
-
Emits change event records that conform to the CloudEvents specification. The CloudEvents change event envelope can be JSON or Avro, and each envelope type supports JSON or Avro as the data format.
- Content-based routing
- Provides a mechanism for rerouting selected events to specific topics, based on the event content.
- Custom-developed converters
- In cases where the default data type conversions do not meet your needs, you can create custom converters to use with a connector.
- Filter SMT
- Enables you to specify a subset of records that you want the connector to send to the broker.
- Signaling for the MongoDB connector
- Provides a mechanism for modifying the behavior of a connector, or triggering a one-time action, such as initiating an ad hoc snapshot of a table.
- Use of the BLOB, CLOB, and NCLOB data types with the Oracle connector
- The Oracle connector can consume Oracle large object types.
3.6. Deprecated Debezium features
- PostgreSQL truncate.handling.mode property
- The truncate.handling.mode property for the Debezium PostgreSQL connector is deprecated in this release and is scheduled for removal in a future release (DBZ-4419). Use the skipped.operations property in its place.
- MonitoredTables option for connector snapshot and streaming metrics
- The MonitoredTables option for Debezium connector metrics is deprecated in this release and scheduled for removal in a future release. Use the CapturedTables metric in its place.
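As an illustrative configuration fragment (the connector name and other required properties are omitted), the replacement property can be used to skip truncate events:

```
# instead of the deprecated truncate.handling.mode, skip truncate (t) events:
skipped.operations=t
```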
Chapter 4. Camel K release notes
Camel K is a lightweight integration framework built from Apache Camel K that runs natively in the cloud on OpenShift. Camel K is specifically designed for serverless and microservice architectures. You can use Camel K to instantly run integration code written in Camel Domain Specific Language (DSL) directly on OpenShift.
Using Camel K with OpenShift Serverless and Knative, containers are automatically created only as needed and are autoscaled under load up and down to zero. This removes the overhead of server provisioning and maintenance and enables you to focus instead on application development.
Using Camel K with OpenShift Serverless and Knative Eventing, you can manage how components in your system communicate in an event-driven architecture for serverless applications. This provides flexibility and creates efficiencies using a publish/subscribe or event-streaming model with decoupled relationships between event producers and consumers.
4.1. Camel K features
Camel K provides cloud-native integration with the following main features:
- Knative Serving for autoscaling and scale-to-zero
- Knative Eventing for event-driven architectures
- Performance optimizations using Quarkus runtime by default
- Camel integrations written in Java or YAML DSL
- Monitoring of integrations using Prometheus in OpenShift
- Quickstart tutorials
- Kamelet Catalog for connectors to external systems such as AWS, Jira, and Salesforce
- Support for Timer and Log Kamelets
- Metering for Camel K Operator and pods
- Support for IBM MQ connector
- Support for Oracle 19 database
4.2. Supported Configurations
For information about Camel K supported configurations, standards, and components, see the following Customer Portal articles:
4.2.1. Camel K Operator metadata
Camel K includes updated Operator metadata used to install Camel K from the OpenShift OperatorHub. This Operator metadata includes the Operator bundle format for release packaging, which is designed for use with OpenShift Container Platform 4.6 or later.
Additional resources
4.3. Important notes
Important notes for the Red Hat Integration - Camel K release:
- Support to run Camel K on ROSA
- Camel K is now supported to run on Red Hat OpenShift Service on AWS (ROSA).
- Support for IBM MQ source connector in Camel K
- The IBM MQ source connector Kamelet is added to the latest Camel K.
- Support for Oracle 19
- Oracle 19 is now supported in Camel K. Refer to the Supported Configurations page for more information.
- Using Camel K CLI commands on Windows machine
-
When using kamel CLI commands on a Windows machine, the path in the resource option in the command must use Linux format. For example:
//Windows path
kamel run file.groovy --dev --resource file:C:\user\folder\tempfile@/tmp/file.txt
//Must be converted to
kamel run file.groovy --dev --resource file:C:/user/folder/tempfile@/tmp/file.txt
- Red Hat Integration - Camel K Operator image size is increased
- Since Red Hat Integration - Camel K 1.8.1.redhat-00024, the size of the Camel K Operator image has doubled.
- Accepted Camel case notations in YAML DSL
-
Since Red Hat Integration - Camel K 1.8.1.redhat-00024, the YAML DSL accepts camel case notation (for example, setBody) as well as snake case (for example, set-body). Note that there are some differences in syntax, as the schema is subject to change between Camel versions.
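As an illustrative sketch (the route and endpoint are hypothetical), the two notations express the same step in the YAML DSL:

```
# camel case notation
- from:
    uri: "timer:tick"
    steps:
      - setBody:
          constant: "Hello"

# equivalent snake case notation
- from:
    uri: "timer:tick"
    steps:
      - set-body:
          constant: "Hello"
```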
4.4. Supported Camel Quarkus extensions
This section lists the Camel Quarkus extensions that are supported for this release of Camel K.
These Camel Quarkus extensions are supported only when used inside a Camel K application. They are not supported for use in standalone mode (without Camel K).
4.4.1. Supported Camel Quarkus connector extensions
The following table shows the Camel Quarkus connector extensions that are supported for this release of Camel K (only when used inside a Camel K application).
| Name | Package |
|---|---|
| AWS 2 Kinesis |
|
| AWS 2 Lambda |
|
| AWS 2 S3 Storage Service |
|
| AWS 2 Simple Notification System (SNS) |
|
| AWS 2 Simple Queue Service (SQS) |
|
| Cassandra CQL |
|
| File |
|
| FTP |
|
| FTPS |
|
| SFTP |
|
| HTTP |
|
| JMS |
|
| Kafka |
|
| Kamelets |
|
| Metrics |
|
| MongoDB |
|
| Salesforce |
|
| SQL |
|
| Timer |
|
4.4.2. Supported Camel Quarkus dataformat extensions
The following table shows the Camel Quarkus dataformat extensions that are supported for this release of Camel K (only when used inside a Camel K application).
| Name | Package |
|---|---|
| Avro |
|
| Bindy (for CSV) |
|
| Gson |
|
| JSON Jackson |
|
| Jackson Avro |
|
4.4.3. Supported Camel Quarkus language extensions
In this release, Camel K supports the following Camel Quarkus language extensions (for use in Camel expressions and predicates):
- Constant
- ExchangeProperty
- File
- Header
- Ref
- Simple
- Tokenize
- JsonPath
4.4.4. Supported Camel K traits
In this release, Camel K supports the following Camel K traits.
- Builder trait
- Camel trait
- Container trait
- Dependencies trait
- Deployer trait
- Deployment trait
- Environment trait
- Jvm trait
- Kamelets trait
- Owner trait
- Platform trait
- Pull Secret trait
- Prometheus trait
- Quarkus trait
- Route trait
- Service trait
- Error Handler trait
4.5. Supported Kamelets
The following table lists the kamelets that are provided as OpenShift resources when you install the Camel K operator.
For details about these kamelets, go to: https://github.com/openshift-integration/kamelet-catalog/tree/kamelet-catalog-1.8
For information about how to use kamelets to connect applications and services, see https://access.redhat.com/documentation/en-us/red_hat_integration/2022.q3/html-single/integrating_applications_with_kamelets.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
Table 4.1. Kamelets provided with the Camel K operator
| Kamelet | File name | Type (Sink, Source, Action) |
|---|---|---|
| Ceph sink |
| Sink |
| Ceph Source |
| Source |
| Jira Add Comment sink |
| Sink |
| Jira Add Issue sink |
| Sink |
| Jira Transition Issue sink |
| Sink |
| Jira Update Issue sink |
| Sink |
| Avro Deserialize action |
| Action (data conversion) |
| Avro Serialize action |
| Action (data conversion) |
| AWS DynamoDB sink |
| Sink |
| AWS Redshift sink |
| Sink |
| AWS 2 Kinesis sink |
| Sink |
| AWS 2 Kinesis source |
| Source |
| AWS 2 Lambda sink |
| Sink |
| AWS 2 Simple Notification System sink |
| Sink |
| AWS 2 Simple Queue Service sink |
| Sink |
| AWS 2 Simple Queue Service source |
| Source |
| AWS 2 Simple Queue Service FIFO sink |
| Sink |
| AWS 2 S3 sink |
| Sink |
| AWS 2 S3 source |
| Source |
| AWS 2 S3 Streaming Upload sink |
| Sink |
| Azure Storage Blob Source (Technology Preview) |
| Source |
| Azure Storage Blob Sink (Technology Preview) |
| Sink |
| Azure Storage Queue Source (Technology Preview) |
| Source |
| Azure Storage Queue Sink (Technology Preview) |
| Sink |
| Cassandra sink |
| Sink |
| Cassandra source |
| Source |
| Extract Field action |
| Action |
| FTP sink |
| Sink |
| FTP source |
| Source |
| Has Header Key Filter action |
| Action (data transformation) |
| Hoist Field action |
| Action |
| HTTP sink |
| Sink |
| Insert Field action |
| Action (data transformation) |
| Insert Header action |
| Action (data transformation) |
| Is Tombstone Filter action |
| Action (data transformation) |
| Jira source |
| Source |
| JMS sink |
| Sink |
| JMS source |
| Source |
| JMS IBM MQ sink |
| Sink |
| JMS IBM MQ source |
| Source |
| JSON Deserialize action |
| Action (data conversion) |
| JSON Serialize action |
| Action (data conversion) |
| Kafka sink |
| Sink |
| Kafka source |
| Source |
| Kafka Topic Name Filter action |
| Action (data transformation) |
| Log sink (for development and testing purposes) |
| Sink |
| MariaDB sink |
| Sink |
| Mask Fields action |
| Action (data transformation) |
| Message TimeStamp Router action |
| Action (router) |
| MongoDB sink |
| Sink |
| MongoDB source |
| Source |
| MySQL sink |
| Sink |
| PostgreSQL sink |
| Sink |
| Predicate filter action |
| Action (router/filter) |
| Protobuf Deserialize action |
| Action (data conversion) |
| Protobuf Serialize action |
| Action (data conversion) |
| Regex Router action |
| Action (router) |
| Replace Field action |
| Action |
| Salesforce Create |
| Sink |
| Salesforce Delete |
| Sink |
| Salesforce Update |
| Sink |
| SFTP sink |
| Sink |
| SFTP source |
| Source |
| Slack source |
| Source |
| SQL Server Database sink |
| Sink |
| Telegram source |
| Source |
| Throttle action |
| Action |
| Timer source (for development and testing purposes) |
| Source |
| TimeStamp Router action |
| Action (router) |
| Value to Key action |
| Action (data transformation) |
4.6. Camel K known issues
The following known issues apply to Camel K:
ENTESB-15306 - CRD conflicts between Camel K and Fuse Online
If an older version of Camel K has ever been installed in the same OpenShift cluster, installing Camel K from the OperatorHub fails due to conflicts with custom resource definitions. For example, this includes older versions of Camel K previously available in Fuse Online.
For a workaround, you can install Camel K in a different OpenShift cluster, or enter the following command before installing Camel K:
$ oc get crds -l app=camel-k -o json | oc delete -f -
ENTESB-15858 - Added ability to package and run Camel integrations locally or as container images
Packaging and running Camel integrations locally or as container images is not currently included in Camel K and has community-only support.
For more details, see the Apache Camel K community.
ENTESB-16477 - Unable to download jira client dependency with productized build
When using the Camel K operator, the integration is unable to find dependencies for the Jira client. The workaround is to add the Atlassian repository manually:
apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
labels:
app: camel-k
name: camel-k
spec:
configuration:
- type: repository
value: <atlassian repo here>
ENTESB-17033 - Camel-K ElasticsearchComponent options ignored
When configuring the Elasticsearch component, the Camel K ElasticsearchComponent options are ignored. The workaround is to add getContext().setAutowiredEnabled(false) when using the Elasticsearch component.
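As a minimal sketch of the workaround described above (placing the call inside a RouteBuilder's configure() method is an assumption about where it goes):

```
// Workaround for ENTESB-17033: disable autowiring before defining
// routes that use the Elasticsearch component
getContext().setAutowiredEnabled(false);
```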
ENTESB-17061 - Can’t run mongo-db-source kamelet route with non-admin user - Failed to start route mongodb-source-1 because of null
It is not possible to run the mongo-db-source Kamelet route with non-admin user credentials. Some parts of the component require admin credentials, so it is not possible to run the route as a non-admin user.
4.7. Camel K Fixed Issues
The following sections list the issues that have been fixed in Red Hat Integration - Camel K 1.8.1.redhat-00024.
4.7.1. Feature requests in Camel K 1.8.1.redhat-00024
The following table lists the feature requests in Camel K 1.8.1.redhat-00024.
Table 4.2. Camel K 1.8.1.redhat-00024 Enhancements
| Issue | Description |
|---|---|
| Camel K - Kamelet for Red Hat Ceph Storage Object (S3) | |
| Create Jira Sink kamelets |
4.7.2. Enhancements in Camel K 1.8.1.redhat-00024
The following table lists the enhancements in Camel K 1.8.1.redhat-00024.
Table 4.3. Camel K 1.8.1.redhat-00024 Enhancements
| Issue | Description |
|---|---|
| Synchronize Camel K productization version with the actual product version | |
| Consider migration of Camel-K operator to ALL-NAMESPACES | |
| Publish operator to 1.x and 1.6.x channel moving forward | |
| Add unit tests for camel-kamelet-utils |
4.7.3. Bugs resolved in Camel K 1.8.1.redhat-00024
The following table lists the resolved bugs in Camel K 1.8.1.redhat-00024.
Table 4.4. Camel K 1.8.1.redhat-00024 Resolved Bugs
| Issue | Description |
|---|---|
| CVE-2021-22135 elasticsearch: Document disclosure flaw in the Elasticsearch suggester [rhint-camel-k-1] | |
| CVE-2021-29427 gradle: repository content filters do not work in Settings pluginManagement [rhint-camel-k-1] | |
| CVE-2021-28169 jetty: requests to the ConcatServlet and WelcomeFilter are able to access protected resources within the WEB-INF directory [rhint-camel-k-1] | |
| CVE-2021-34428 jetty: SessionListener can prevent a session from being invalidated breaking logout [rhint-camel-k-1] | |
| CVE-2021-3642 wildfly-elytron: possible timing attack in ScramServer [rhint-camel-k-1] | |
| Kamelet has-header-filter-action differs in upstream. | |
| Camel K images don't create PST manifest for golang code | |
| Camel K 1.6.0 Sources contain multiple defects (tech-preview + wrong versions) | |
| AWS Redshift Sink kamelet dependency mvn:com.amazon.redshift:redshift-jdbc42:2.1.0.5 missing in bom | |
| Salesforce Source kamelet: notifyForFields isn't very usable | |
| AWS S3 Streaming upload kamelet: streamingUploadMode probably shouldn't be configurable | |
| CVE-2022-30973 tika-core: incomplete fix for CVE-2022-30126 [rhint-camel-k-1] | |
| Upgrade test to channel 1.8.x fails with "2 replacement chains were found" | |
| Action kamelet tests fail on Unsupported field: unmarshalTypeName | |
| Default maven repositories are overridden when using kamel install | |
| Camel-K 1.8.1.CK1 unaligned versions | |
| Cannot run kameletBinding with the native mode | |
| Wrong snakeyaml dependency in MRRC | |
| Elasticsearch source Kamelet would not work with basic authentication |
Chapter 5. Camel Spring Boot release notes
5.1. Camel Spring Boot features
This release of Camel Spring Boot introduces Camel support for Spring Boot, which provides auto-configuration of Camel and starters for many Camel components. The opinionated auto-configuration of the Camel context auto-detects Camel routes available in the Spring context and registers the key Camel utilities (like producer template, consumer template, and the type converter) as beans.
5.2. Supported platforms, configurations, databases, and extensions
- For information about supported platforms, configurations, and databases in Camel Spring Boot version 3.14, see the Supported Configuration page on the Customer Portal (login required).
- For a list of Red Hat Camel Spring Boot extensions, see the Camel Spring Boot Reference (login required).
5.3. Important notes
Documentation for Camel Spring Boot components is available in the Camel Spring Boot Reference. Documentation for additional Camel Spring Boot components will be added to this reference guide.
5.4. Additional resources
Chapter 6. Service Registry release notes
Service Registry 2.3 is provided as a General Availability release. Service Registry is a datastore for standard event schemas and API designs, and is based on the Apicurio Registry open source community project.
You can use Service Registry to manage and share the structure of your data using a web console, REST API, Maven plug-in, or Java client. For example, client applications can dynamically push or pull the latest schema updates to or from Service Registry without needing to redeploy. You can also use Service Registry to create optional rules to govern how registry content evolves over time. These rules include content validation and backwards or forwards compatibility of schema or API versions.
6.1. Service Registry installation options
You can install Service Registry on OpenShift with either of the following data storage options:
- PostgreSQL database
- AMQ Streams
For more details, see Installing and deploying Service Registry on OpenShift.
6.2. Service Registry platform component versions
Service Registry 2.3 supports the following versions:
- OpenShift Container Platform 4.6 - 4.11
- PostgreSQL 12 - 15
- AMQ Streams 2.1, 2.2
- Red Hat Single Sign-On (RH-SSO) 7.6
- OpenJDK 11
6.3. Service Registry new features
- Service Registry authentication and authorization
- Expanded role-based authorization - you can now configure role-based authorization in Service Registry, as well as in RH-SSO as previously. If role-based authorization is enabled in the Service Registry application, you can use the web console or REST API to control access.
- Expanded owner-based authorization - you can now enable the owner-based authorization option at the artifact-group level, as well as at the artifact level as previously.
- Anonymous read access - when the anonymous read access option is enabled, unauthenticated (anonymous) users have read-only access to all artifacts.
- Authenticated read access - when the authenticated read access option is enabled, any authenticated user has read-only access to all artifacts, even if the user has not been granted any Service Registry roles.
- HTTP basic authentication - when this option is enabled, users or client applications can use HTTP basic authentication to access Service Registry.
- Custom TLS certificate for Kafka storage - when using Kafka for storage, users can now securely connect to Kafka using a custom TLS certificate.
- Change artifact owner - administrators or artifact owners can change the owner of a specific schema or API artifact by using the REST API or web console.
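When HTTP basic authentication is enabled, a client simply supplies a standard `Authorization: Basic` header. A minimal Python sketch of constructing that header (the username and password are placeholders):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Encode username:password as an HTTP Basic Authorization header."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# Placeholder credentials for illustration only.
print(basic_auth_header("registry-user", "changeit"))
```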
- Operational and monitoring improvements
- Audit logging - any changes to Service Registry data result in an audit log entry.
- Prometheus metrics - metrics are exposed in Prometheus format for use in monitoring.
- Sentry integration - optional integration with Sentry 1.x.
- Operator improvements
- Custom environment variables - you can now set arbitrary environment variables in the `ApicurioRegistry` custom resource. These variables are applied to Service Registry using the `Deployment` resource.
- Support for PodDisruptionBudget - this resource is automatically created to ensure that at most one replica is unavailable.
- Support for NetworkPolicy - the Service Registry Operator creates an ingress network policy for port 8080.
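As a sketch, custom environment variables are declared directly in the `ApicurioRegistry` custom resource; the field path and values below are illustrative, so verify them against the Operator documentation for your version:

```yaml
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-registry
spec:
  configuration:
    # Illustrative only -- confirm the exact field path for your Operator version.
    env:
      - name: LOG_LEVEL
        value: DEBUG
```

The Operator propagates these entries to the container spec in the generated `Deployment` resource.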
- Artifact references
- Artifacts can now reference other artifacts in Service Registry. Many supported artifact types allow references from one file to another. For example, an OpenAPI file might have a data type with a property that references a JSON schema defined in another file. Typically, these references have a syntax specific to the artifact type. You can now use the REST API to create mappings so that type-specific references can be resolved to artifacts registered in Service Registry.
- Dynamic global configuration of Service Registry instances
- Service Registry has many global configuration options that are typically set at deployment time. A subset of these options are now also configurable at runtime for a Service Registry instance. You can manage these options at runtime by using the REST API or web console. For example, these options include owner-based authorization, anonymous read access, and authenticated read access.
- Upload artifact from URL
- You can now upload a schema or API artifact from a URL, in addition to the already supported upload from a file. You can upload by using the Service Registry web console or the REST API.
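As a sketch of the REST interaction involved, the following Python snippet builds a create-artifact request. The endpoint layout follows the core v2 API; the header name, IDs, and schema content are assumptions for illustration:

```python
import json
from urllib.request import Request

def create_artifact_request(base_url: str, group_id: str,
                            artifact_id: str, content: bytes) -> Request:
    """Build a POST request that registers new artifact content
    under the given group and artifact ID."""
    return Request(
        f"{base_url.rstrip('/')}/apis/registry/v2/groups/{group_id}/artifacts",
        data=content,
        headers={
            "Content-Type": "application/json",
            # Assumed header name for supplying the artifact ID.
            "X-Registry-ArtifactId": artifact_id,
        },
        method="POST",
    )

# Hypothetical schema and deployment values.
schema = json.dumps({"type": "record", "name": "Price", "fields": []}).encode()
req = create_artifact_request("http://my-registry:8080", "default", "price-schema", schema)
print(req.full_url, req.get_method())
```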
- Web console improvements
- Import and export of Service Registry data - admin users can now use the web console to export all Service Registry data in a `.zip` file, as well as using the REST API as previously. They can then import this `.zip` file into a different Service Registry deployment.
- Full support for artifact properties - artifacts in Service Registry can have user-defined and editable metadata such as name, description, labels (simple keyword list), and properties (name/value pairs). The web console has been enhanced to support displaying and editing properties, in addition to using the REST API as previously.
- Documentation generation for AsyncAPI artifacts - AsyncAPI artifacts now support the Documentation tab on the artifact details page. This tab displays human-readable documentation generated from the AsyncAPI content. This feature was previously available only for OpenAPI artifacts.
- Option to display JSON as YAML - for artifact types that are JSON formatted, the Content tab on the artifact details page now supports switching between JSON and YAML formats.
- REST API improvements
- Improved /users/me endpoint - the Service Registry core REST API has a `/users/me` endpoint that returns information about the current authenticated user. You can use this endpoint to inspect a user’s assigned role and determine their capabilities.
- Updated support for Confluent Compatibility API - Service Registry now supports the Confluent Schema Registry API version 6.
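A client can call this endpoint to check its own effective permissions before attempting an operation. A Python sketch of building the call (the server URL, endpoint path, and token are placeholder assumptions):

```python
from urllib.request import Request

def current_user_request(base_url: str, access_token: str) -> Request:
    """Build a GET request for the /users/me endpoint of the core v2 REST API."""
    return Request(
        base_url.rstrip("/") + "/apis/registry/v2/users/me",
        headers={"Authorization": f"Bearer {access_token}"},
        method="GET",
    )

# Hypothetical deployment URL and token.
req = current_user_request("http://my-registry:8080", "example-token")
print(req.full_url)
```

The JSON response describes the authenticated user, including the assigned registry role.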
Service Registry user documentation and examples
The documentation library has been updated with the new features available in version 2.3:
The open source demonstration applications have also been updated:
6.4. Service Registry deprecated features
- Service Registry version 1.x
- Service Registry version 1.x was deprecated in version 2.0 and is no longer fully supported. For more details, see the Red Hat Application Services Product Update and Support Policy.
6.5. Upgrading and migrating Service Registry deployments
6.5.1. Upgrading a Service Registry 2.0 deployment on OpenShift
You can upgrade from Service Registry 2.0.3 on OpenShift 4.9 to Service Registry 2.3.0 on OpenShift 4.11. You must upgrade both your Service Registry and your OpenShift versions, and upgrade OpenShift one minor version at a time.
Prerequisites
- You already have Service Registry 2.0.3 installed on OpenShift 4.9.
Procedure
- In the OpenShift Container Platform web console, click Administration and then Cluster Settings.
- Click the pencil icon next to the Channel field, and select the next minor candidate version (for example, change from `stable-4.9` to `candidate-4.10`).
- Click Save and then Update, and wait until the upgrade is complete.
- If the OpenShift version is less than 4.11, repeat steps 2 and 3, and select `candidate-4.11`.
- Click Operators > Installed Operators > Red Hat Integration - Service Registry.
- Ensure that the Update channel is set to `2.x`.
- If the Update approval is set to Automatic, the upgrade should be approved and installed immediately after the `2.x` channel is set.
- If the Update approval is set to Manual, click Install.
- Wait until the Operator is deployed and the Service Registry pod is deployed.
- Verify that your Service Registry system is up and running.
Additional resources
- For more details on how to set the Operator update channel in the OpenShift Container Platform web console, see Changing the update channel for an Operator.
6.5.2. Migrating a Service Registry 1.1 deployment on OpenShift
For details on migrating a Service Registry version 1.1 deployment to version 2.x, see Migrating Service Registry deployments.
6.6. Service Registry resolved issues
Table 6.1. Service Registry core resolved issues
| Issue | Description |
|---|---|
| REST API endpoint for core v1 compatibility not properly protected by authentication. | |
| Web console incorrectly redirects to HTTP instead of HTTPS. | |
| Service Registry throws | |
| Confluent compatibility layer not working with JSON Schema artifacts. | |
| Confluent compatibility layer’s schema DTO is not fully compatible. | |
| Web console does not properly obey the disable roles feature. | |
| Confluent compatibility API v6 does not return artifact. | |
| Passing | |
| Web console displays inconsistent | |
| Global compatibility rule execution broken for | |
| Transitive compatibility rules might give false positives. | |
6.7. Service Registry resolved CVEs
Table 6.2. Service Registry resolved Common Vulnerabilities and Exposures (CVEs)
| Issue | Description |
|---|---|
| CVE-2022-25858 terser: Insecure use of regular expressions leads to ReDoS. | |
| CVE-2022-37734 graphql-java: DoS by malicious query. | |
| CVE-2022-25857 snakeyaml: DoS due to missing nested depth limitation for collections. | |
| CVE-2022-31129 moment: Inefficient parsing algorithm resulting in DoS. | |
| CVE-2022-25647 com.google.code.gson-gson: Deserialization of untrusted data. | |
| CVE-2022-24773 node-forge: Signature verification leniency in checking | |
| CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery. | |
| CVE-2022-24771 node-forge: Signature verification leniency in checking | |
| CVE-2022-26520 jdbc-postgresql: Arbitrary file write vulnerability. | |
| CVE-2022-0536 follow-redirects: Exposure of sensitive information by | |
| CVE-2022-0235 node-fetch: Exposure of sensitive information to an unauthorized actor. | |
| CVE-2022-23647 prismjs: Improperly escaped output allows an XSS vulnerability. | |
| CVE-2022-0981 quarkus: Privilege escalation vulnerability with RestEasy Reactive scope leakage in Quarkus. | |
| CVE-2022-21724 quarkus-jdbc-postgresql-deployment: Unchecked class instantiation when providing plug-in classes. | |
| CVE-2021-22569 protobuf-java: Potential DoS in the parsing procedure for binary data. | |
| CVE-2021-41269 cron-utils: Template injection leading to unauthenticated Remote Code Execution vulnerability. | |
| CVE-2021-37136 netty-codec: | |
| CVE-2021-37137 netty-codec: | |
6.8. Service Registry known issues
Service Registry core known issues
Registry-2991 - On slow machines, kafkasql storage is not ready for existing messages
When Service Registry is deployed with Kafka storage on slower compute nodes, the internal H2 database might not be ready when the Kafka consumer thread starts. In this case, Service Registry might consume existing messages from the Kafka topic before the internal database is ready to store the messages. This discrepancy results in a stack trace, and the Service Registry node might fail to start.
To work around this issue, increase the startup delay of the Kafka message consumer thread in Service Registry by raising the value of the REGISTRY_KAFKASQL_CONSUMER_STARTUPLAG environment variable, which is set to 100 ms by default.
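Assuming your deployment supports setting environment variables through the `ApicurioRegistry` custom resource, the delay could be raised like this (the value is in milliseconds; the 1000 ms figure and field path are only an illustration):

```yaml
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-registry
spec:
  configuration:
    env:
      # Illustrative delay -- tune for your node's startup speed.
      - name: REGISTRY_KAFKASQL_CONSUMER_STARTUPLAG
        value: "1000"
```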
IPT-814 - Service Registry logout feature incompatible with RH-SSO 7.6
In RH-SSO 7.6, the redirect_uri parameter used with the logout endpoint is deprecated. For more details, see the RH-SSO 7.6 Upgrading Guide. Because of this deprecation, when Service Registry is secured by using the RH-SSO Operator, clicking the Logout button displays the Invalid parameter: redirect_uri error.
For a workaround, see https://access.redhat.com/solutions/6980926.
IPT-701 - CVE-2022-23221 H2 allows loading custom classes from remote servers through JNDI
When Service Registry data is stored in AMQ Streams, the H2 database console allows remote attackers to execute arbitrary code by using the JDBC URL. Service Registry is not vulnerable by default; exploiting this issue requires a malicious configuration change.
Service Registry Operator known issues
Operator-42 - Autogeneration of OpenShift route might use wrong base host value
If multiple routerCanonicalHostname values are specified, autogeneration of the Service Registry OpenShift route might use a wrong base host value.
Chapter 7. Red Hat Integration Operators
Red Hat Integration 2022.q3 introduces Red Hat Integration Operator 1.3.
Red Hat Integration provides Operators to automate the deployment of Red Hat Integration components on OpenShift. You can use Red Hat Integration Operator to manage those Operators.
Alternatively, you can manage each component Operator individually. This section introduces Operators and provides links to detailed information on how to use Operators to deploy Red Hat Integration components.
7.1. What Operators are
Operators are a method of packaging, deploying, and managing a Kubernetes application. They take human operational knowledge and encode it into software that is more easily shared with consumers to automate common or complex tasks.
In OpenShift Container Platform 4.x, the Operator Lifecycle Manager (OLM) helps users install, update, and generally manage the life cycle of all Operators and their associated services running across their clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes native applications (Operators) in an effective, automated, and scalable way.
The OLM runs by default in OpenShift Container Platform 4.x, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster.
OperatorHub is the graphical interface that OpenShift cluster administrators use to discover, install, and upgrade Operators. With one click, these Operators can be pulled from OperatorHub, installed on the cluster, and managed by the OLM, ready for engineering teams to self-service manage the software in development, test, and production environments.
Additional resources
- For more information about Operators, see the OpenShift documentation.
7.2. Red Hat Integration component Operators
You can install and upgrade each Red Hat Integration component Operator individually, for example, using the 3scale Operator, the Camel K Operator, and so on.
7.2.1. 3scale Operators
7.2.2. AMQ Operators
7.2.3. Camel K Operator
7.2.4. Fuse Operators
7.2.5. Service Registry Operator
7.3. Red Hat Integration Operator (deprecated)
You can use Red Hat Integration Operator 1.3 to install and upgrade multiple Red Hat Integration component Operators:
- 3scale
- 3scale APIcast
- AMQ Broker
- AMQ Interconnect
- AMQ Streams
- API Designer
- Camel K
- Fuse Console
- Fuse Online
- Service Registry
The Red Hat Integration Operator has been deprecated and will be removed in the future. It will be available from the OperatorHub in OpenShift 4.6 to 4.10. The individual Red Hat Integration component Operators, which you can install separately, will continue to be supported.
7.3.1. Supported components
Before installing the Operators using Red Hat Integration Operator 1.3, check the updates in the Release Notes of the components. The Release Notes for the supported version describe any additional upgrade requirements.
- Release Notes for Red Hat 3scale API Management 2.10 On-premises
- Release Notes for Red Hat AMQ Broker 7.8
- Release Notes for Red Hat AMQ Interconnect 1.10
- Release Notes for Red Hat AMQ Streams 2.0 on OpenShift
- Release Notes for Red Hat Fuse 7.10 (Fuse and API Designer)
- Release Notes for Red Hat Integration 2021.Q3 (Service Registry 2.0 release notes)
- Release Notes for Red Hat Integration 2021.Q4 (Camel K release notes)
AMQ Streams new API version
Red Hat Integration Operator 1.3 installs the Operator for AMQ Streams 2.0.
You must upgrade your custom resources to use API version v1beta2 before upgrading to AMQ Streams version 1.8 or later.
AMQ Streams 1.7 introduced the v1beta2 API version, which updates the schemas of the AMQ Streams custom resources. Older API versions are now deprecated. After you have upgraded to AMQ Streams 1.7, and before you upgrade to AMQ Streams 2.0, you must upgrade your custom resources to use API version v1beta2.
If you are upgrading from an AMQ Streams version prior to version 1.7:
- Upgrade to AMQ Streams 1.7
- Convert the custom resources to v1beta2
- Upgrade to AMQ Streams 2.0
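After conversion, each AMQ Streams custom resource declares the new API version. For example, a converted `Kafka` resource begins as follows (the resource name is illustrative; the rest of the spec is unchanged in substance by the conversion):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...existing cluster configuration, migrated to the v1beta2 schema...
```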
For more information, refer to the following documentation:
Upgrading the AMQ Streams Operator to version 2.0 fails in clusters where custom resources and CRDs have not been converted to API version v1beta2, and the upgrade remains stuck on Pending. If this happens, do the following:
- Perform the steps described in the Red Hat Solution, Forever pending cluster operator upgrade.
- Scale the Integration Operator to zero, and then back to one, to trigger an installation of the AMQ Streams 2.0 Operator.
Service Registry 2.0 migration
Red Hat Integration Operator installs Service Registry 2.0.
Service Registry 2.0 does not replace Service Registry 1.x installations, which need to be manually uninstalled.
For information on migrating from Service Registry version 1.x to 2.0, see the Service Registry 2.0 release notes.
7.3.2. Support life cycle
To remain in a supported configuration, you must deploy the latest Red Hat Integration Operator version. Each Red Hat Integration Operator release version is supported for only 3 months.
7.3.3. Fixed issues
There are no fixed issues for Red Hat Integration Operator 1.3.
Additional resources
- For more details on managing multiple Red Hat Integration component Operators, see Installing the Red Hat Integration Operator on OpenShift.