Introduction to Red Hat OpenShift Connectors

  • Red Hat OpenShift Connectors 1
  • Updated 24 March 2023
  • Published 20 January 2023

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright’s message.

What is OpenShift Connectors?

Red Hat OpenShift Connectors is a user-friendly way to quickly configure communication between OpenShift Streams for Apache Kafka instances and external services and applications. With Connectors, you configure how data moves from one endpoint to another without writing a single line of code.

Figure 1 illustrates how data flows from a data source through a data source connector to a Kafka topic, and from a Kafka topic through a data sink connector to a data sink.

[Image of data flowing from a data source to a data sink]
Figure 1. Red Hat OpenShift Connectors data flow
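The flow in Figure 1 can be sketched conceptually in plain Python. This is a minimal illustration of the source-topic-sink pattern only, not the Connectors API; the class names, topic name, and records are hypothetical:

```python
from collections import deque

class KafkaTopic:
    """Stands in for a Kafka topic: an ordered destination where data is stored."""
    def __init__(self, name):
        self.name = name
        self.records = deque()

class SourceConnector:
    """Ingests data from an external system into a topic (acts as a Kafka producer)."""
    def __init__(self, topic):
        self.topic = topic
    def ingest(self, record):
        self.topic.records.append(record)

class SinkConnector:
    """Sends data from a topic on to an external system (acts as a Kafka consumer)."""
    def __init__(self, topic, deliver):
        self.topic = topic
        self.deliver = deliver  # callable standing in for the external endpoint
    def poll(self):
        # Drain all currently available records and deliver them downstream
        while self.topic.records:
            self.deliver(self.topic.records.popleft())

# Data source -> source connector -> topic -> sink connector -> data sink
topic = KafkaTopic("orders")
source = SourceConnector(topic)
received = []                          # the "data sink"
sink = SinkConnector(topic, received.append)

source.ingest({"order_id": 1})
source.ingest({"order_id": 2})
sink.poll()
print(received)  # -> [{'order_id': 1}, {'order_id': 2}]
```

The point of the sketch is the division of labor: the source connector only knows how to write to the topic, the sink connector only knows how to read from it, and neither endpoint needs to know about the other.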

Red Hat OpenShift Connectors provides access to more than 60 prebuilt connectors (based on the open source community projects Red Hat Camel K and Red Hat Debezium). This ecosystem of prebuilt connectors provides an efficient way for developers to create integration patterns and connectivity within application infrastructure products.

Understanding Connectors

To understand Connectors, it’s important to understand the key concepts described here.

Source connector
A connector that ingests data from another system into a Kafka instance (a Kafka producer).
Sink connector
A connector that sends data from a Kafka instance into another system (a Kafka consumer).
Connectors instance
An instance of a connector that you create by providing configuration information specific to your use case. For example, to create an instance of an HTTP sink connector, you provide configuration details such as the URL of the destination website.
Connectors namespace
The hosting space for your deployed Connectors instances.
Red Hat OpenShift Streams for Apache Kafka
A cloud service that simplifies the process of running Apache Kafka. Apache Kafka is an open-source, distributed, publish-subscribe messaging system for creating fault-tolerant, real-time data feeds.
Kafka topic
A named destination within a Kafka instance where data is stored. Source connectors write messages to a topic; sink connectors read messages from it.
Red Hat Camel K
A lightweight integration framework built from Apache Camel K that runs natively in the cloud on OpenShift.
Red Hat Debezium
A distributed platform that converts information from your existing databases into event streams, enabling applications to detect and immediately respond to row-level changes in those databases.
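To make the "Connectors instance" concept above concrete, the configuration you supply for an HTTP sink connector instance might look something like the following. This is a hypothetical sketch only; the actual property names and structure are defined per connector in the Connectors wizard:

```json
{
  "name": "my-http-sink",
  "kafka_topic": "orders",
  "connector": {
    "url": "https://example.com/webhook",
    "method": "POST"
  },
  "error_handling": "stop"
}
```

Here `kafka_topic` identifies the topic the instance consumes from, `connector.url` is the use-case-specific website URL mentioned above, and `error_handling` stands in for the message error handling policy you select when creating the instance.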

How to use OpenShift Connectors

As a developer of applications and services, you configure Connectors to create connections between OpenShift Streams for Apache Kafka and third-party systems. You connect data sources that create Kafka messages and data sinks that consume the Kafka messages.

Before you can use Connectors, you must configure Streams for Apache Kafka as described in Getting started with Red Hat OpenShift Streams for Apache Kafka.

The following list of tasks provides an overview of how to configure Connectors. For detailed steps, see the quick start or Getting started with Red Hat OpenShift Connectors.

  1. Set up an OpenShift Streams for Apache Kafka instance (including access permissions for your service account) and create one or more Kafka topics to use for Connectors.

  2. Determine which connectors you want to use in your integration application. The OpenShift Connectors catalog provides over 60 pre-built connectors that you can choose from. You can filter by type (sink or source) or you can search by name.

  3. Create instances of one or more source connectors, sink connectors, or both by following the steps in the Connectors wizard. The wizard prompts you to provide values for configuration parameters and to select a message error handling policy.

  4. Verify that messages are being sent by the source Connectors instances to Kafka and received by the sink Connectors instances from Kafka.

Get OpenShift Connectors

If you have a Red Hat OpenShift Service for AWS cluster, you can purchase Red Hat OpenShift Connectors as a prepaid subscription.

To learn more about this option, go to the Red Hat OpenShift Connectors product page and then click Talk to a Red Hatter.

Try OpenShift Connectors

You can also try OpenShift Connectors at no cost. How you try OpenShift Connectors depends on whether you have access to your own OpenShift environment and on how long you want to try out OpenShift Connectors.

  • The hosted preview environment

    • The Connectors instances are hosted on a multitenant OpenShift cluster that is owned by Red Hat.

    • You can create four Connectors instances at a time.

    • The preview environment applies 48-hour expiration windows, as described in Red Hat OpenShift Connectors Preview guidelines.

  • Your own Red Hat OpenShift Dedicated Trial environment