As a developer of applications and services, you might need to connect client applications to instances in cloud services such as Red Hat OpenShift Streams for Apache Kafka and Red Hat OpenShift Service Registry.
For example, suppose you have the following applications running locally on your computer:
- One application that is designed to publish a stream of stock price updates
- A second application that is designed to consume the stock price updates and chart them on a dashboard
In addition, suppose you have the following instances in Red Hat OpenShift Application Services:
- A Kafka instance in OpenShift Streams for Apache Kafka
- A Service Registry instance in OpenShift Service Registry
Each time the first application produces a stock price update, you want to use the Kafka instance to forward the update as an event to the second, consuming application. In addition, you want each of the applications to use a schema in the Service Registry instance to validate that messages conform to a particular format.
In this scenario, you need to connect your applications to your Kafka and Service Registry instances. To achieve this, you can use the Red Hat OpenShift Application Services (rhoas) command-line interface (CLI) to create a context for your service instances. You can then use another CLI command to generate the required connection information for each service instance in the context.
This guide describes how to use the rhoas CLI to create contexts and then generate the connection configuration information that client applications need to connect to service instances in those contexts.
About service contexts
In Red Hat OpenShift Application Services, a service context is a defined set of instances running in cloud services such as Red Hat OpenShift Streams for Apache Kafka and Red Hat OpenShift Service Registry. You might create different contexts for specific use cases, projects, or environments.
To create a context, you can use the Red Hat OpenShift Application Services (rhoas) command-line interface (CLI). New service instances that you create are automatically added to the context that is currently in use. You can switch between different contexts and add or remove service instances as required. You can include the same service instance in multiple contexts.
When you have created a service context, you can use a single CLI command to generate the configuration information that client applications need to connect to the instances in that context. You can generate connection configuration information in various formats such as an environment variables file (.env), a JSON file, a Java properties file, and a Kubernetes ConfigMap.
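Whichever format you generate, the underlying key-value pairs are the same; only the container format changes. As a rough sketch, the following Python snippet renders a pair of hypothetical connection values (the names and values here are placeholders, not output from rhoas) as .env lines and as JSON:

```python
import json

# Hypothetical connection settings; real values come from your own instances.
settings = {
    "KAFKA_HOST": "my-kafka.example.com:443",
    "SERVICE_REGISTRY_URL": "https://registry.example.com/t/abc123",
}

def to_env(pairs):
    """Render settings as environment-variable (.env) lines."""
    return "\n".join(f"{key}={value}" for key, value in pairs.items())

def to_json(pairs):
    """Render the same settings as a JSON document."""
    return json.dumps(pairs, indent=2)

print(to_env(settings))
print(to_json(settings))
```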
Creating new service contexts
The following example shows how to use the CLI to create contexts in Red Hat OpenShift Application Services and then manage the service instances that are defined in those contexts.
- You have installed the Red Hat OpenShift Application Services (rhoas) CLI. For more information, see Installing the rhoas CLI.
- Log in to the CLI.
$ rhoas login
The login command opens a sign-in process in your web browser.
- Use the CLI to create a new service context.
$ rhoas context create --name development-context
The new context becomes the current (that is, active) context by default.
- Create a new Kafka instance in Red Hat OpenShift Streams for Apache Kafka.
$ rhoas kafka create --name my-kafka-instance
The CLI automatically adds the Kafka instance to the current context.
- Create a new Service Registry instance in Red Hat OpenShift Service Registry.
$ rhoas service-registry create --name my-registry-instance
The CLI also automatically adds the Service Registry instance to the current context.
- Confirm that the Kafka and Service Registry instances are in the current context and are running.
Command to view status of service instances in current context:
$ rhoas context status
You see output that looks like the following example:
Status of example context with single Kafka and Service Registry instances:
Service Context Name:    development-context
Context File Location:   /home/<user-name>/.config/rhoas/contexts.json

  Kafka
  -----------------------------------------------------------------------------
  ID:            cafkr2jma40lhulbl1c0
  Name:          my-kafka-instance
  Status:        ready
  Bootstrap URL: kafka-inst-cafkr-jma--lhulbl-ca.bf2.kafka.rhcloud.com:443

  Service Registry
  ------------------------------------------------------------------------------
  ID:           0aa1dd8b-63d5-466c-9de8-7c03320a81c2
  Name:         my-registry-instance
  Status:       ready
  Registry URL: https://bu98.serviceregistry.rhcloud.com/t/0aa1dd8b-63d5-466c-9de8-7c03320a81c2
- Create another service context.
$ rhoas context create --name production-context
The new service context becomes the current context.
- Add the Kafka instance that you created earlier in the procedure to the new service context.
$ rhoas context set-kafka --name my-kafka-instance
The Kafka instance is now part of both of the service contexts that you created.
To remove a particular service from a context, use the context unset command with the --services flag and specify the service name, for example, kafka or service-registry. An example is shown:
$ rhoas context unset --name development-context --services kafka
- To learn more about the context commands that you can use to manage service contexts, see CLI command reference (rhoas).
Generating connection configuration information for a local Quarkus application
The following example shows how to connect an example Quarkus application running locally on your computer to the service instances defined in a context in Red Hat OpenShift Application Services. Quarkus is a Kubernetes-native Java framework that is optimized for serverless, cloud, and Kubernetes environments.
The Quarkus application uses a topic in a Kafka instance to produce and consume a stream of quote values and display these on a web page. The application consists of two components:
- A producer component that periodically produces a new quote value and publishes this to a Kafka topic called quotes.
- A consumer component that streams quote values from the Kafka topic. This component also has a minimal front end that uses server-sent events to show the quote values on a web page.
In addition, the producer and consumer components serialize and deserialize Kafka messages using an Avro schema stored in Service Registry. Use of the schema ensures that message values conform to a defined format.
- You have a Red Hat account.
- You have a service context with a Kafka and Service Registry instance.
- Git is installed.
- You have an IDE such as IntelliJ IDEA, Eclipse, or Visual Studio Code.
- OpenJDK 11 or later is installed. (The latest LTS version of OpenJDK is recommended.)
- Apache Maven 3.8 (or a later Maven 3 release) is installed.
- On the command line, clone the Red Hat OpenShift Application Services Guides and Samples repository from GitHub.
$ git clone https://github.com/redhat-developer/app-services-guides app-services-guides
- In your IDE, open the code-examples/quarkus-service-registry-quickstart directory from the repository that you cloned.
You see that the sample Quarkus application has two components: a producer component and a consumer component. The producer component publishes a stream of quote values to a Kafka topic. The consumer component consumes these values and displays them on a web page.
- On the command line, create the quotes topic required by the Quarkus application.
$ rhoas kafka topic create --name quotes
- Ensure that you are using the service context that includes your Kafka and Service Registry instances, as shown in the following example:
$ rhoas context use --name development-context
- In the guides and samples repository that you cloned, navigate to the directory for the Quarkus application.
$ cd ~/app-services-guides/code-examples/quarkus-service-registry-quickstart/
- Create a service account for the Quarkus application to authenticate with the Kafka and Service Registry instances in the context. Save the credentials in an environment variables file in the directory for the producer component.
$ rhoas service-account create --file-format env --output-file ./producer/.env
- Generate an environment variables file that contains the connection configuration information required by the producer component.
$ rhoas generate-config --type env --output-file ./producer/rhoas.env
- Append the contents of the connection configuration file to the service account environment variables file.
$ cat ./producer/rhoas.env >> ./producer/.env
- Copy the updated .env file to the directory for the consumer component, as shown in the following Linux example:
$ cp ./producer/.env ./consumer/.env
For a service context with single Kafka and Service Registry instances, the .env file looks like the following example:
Example environment variables file for connection configuration and credentials:
## Generated by rhoas cli
RHOAS_SERVICE_ACCOUNT_CLIENT_ID=<client-id>
RHOAS_SERVICE_ACCOUNT_CLIENT_SECRET=<client-secret>
RHOAS_SERVICE_ACCOUNT_OAUTH_TOKEN_URL=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token

## Generated by rhoas cli
## Kafka Configuration
KAFKA_HOST=kafka-inst-cafkr-jma--lhulbl-ca.bf2.kafka.rhcloud.com:443

## Service Registry Configuration
SERVICE_REGISTRY_URL=https://bu98.serviceregistry.rhcloud.com/t/0aa1dd8b-63d5-466c-9de8-7c03320a81c2
SERVICE_REGISTRY_CORE_PATH=/apis/registry/v2
SERVICE_REGISTRY_COMPAT_PATH=/apis/ccompat/v6
As shown in the example, the file that you generate contains the endpoints for your service instances, and the credentials required to connect to those instances.
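The generated format is simple to consume. As an illustration, the following Python sketch parses such a file into a dictionary (the Quarkus quickstart loads the file through its own configuration machinery, not code like this, and the sample values here are placeholders):

```python
def parse_env_file(text):
    """Parse KEY=VALUE lines, skipping blank lines and # comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key] = value
    return config

sample = """\
## Generated by rhoas cli
RHOAS_SERVICE_ACCOUNT_CLIENT_ID=<client-id>
KAFKA_HOST=kafka-inst-example.bf2.kafka.rhcloud.com:443
"""

config = parse_env_file(sample)
print(config["KAFKA_HOST"])
```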
- Set Access Control List (ACL) permissions to enable the new service account to access resources in the Kafka instance.
Example command for granting access to Kafka instance:
$ rhoas kafka acl grant-access --producer --consumer --service-account <client-id> --topic quotes --group all
The command you entered allows applications to use the service account to produce and consume messages in the quotes topic. Applications can use any consumer group.
- Use Role-Based Access Control (RBAC) to enable the new service account to access the Service Registry instance and the artifacts (such as schemas) that it contains.
Example command for granting access to Service Registry instance:
$ rhoas service-registry role add --role manager --service-account <client-id>
- In the guides and samples repository, navigate to the directory for the producer component. Use Apache Maven to run the producer component in developer mode.
$ cd ~/app-services-guides/code-examples/quarkus-service-registry-quickstart/producer
$ mvn quarkus:dev
The producer component starts to generate quote values to the quotes topic in the Kafka instance.
The Quarkus application also creates an Avro schema called quotes-value in the Service Registry instance. The producer and consumer components use the schema to ensure that message values conform to a defined format.
To view the contents of the quotes-value schema, run the following command:
$ rhoas service-registry artifact get --artifact-id quotes-value
You see output that looks like the following example:
Example Avro schema in Service Registry:
{
  "type": "record",
  "name": "Quote",
  "namespace": "org.acme.kafka.quarkus",
  "fields": [
    {
      "name": "id",
      "type": {
        "type": "string",
        "avro.java.string": "String"
      }
    },
    {
      "name": "price",
      "type": "int"
    }
  ]
}
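The schema defines each message value as a record with a string id and an int price. The following Python sketch is a simplified stand-in for what Avro validation enforces; real serialization uses an Avro library, not code like this:

```python
# Field names and types taken from the quotes-value schema shown above.
QUOTE_FIELDS = {"id": str, "price": int}

def conforms_to_quote_schema(record):
    """Check that a dict has exactly the fields the Quote schema defines,
    with matching Python types (a simplified stand-in for Avro validation)."""
    if set(record) != set(QUOTE_FIELDS):
        return False
    return all(isinstance(record[name], type_) for name, type_ in QUOTE_FIELDS.items())

print(conforms_to_quote_schema({"id": "q-1", "price": 42}))    # True
print(conforms_to_quote_schema({"id": "q-1", "price": "42"}))  # False: price must be an int
```

This is the guarantee the schema provides: a producer cannot publish a quote whose value is missing a field or uses the wrong type.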
- With the producer component still running, open a second command-line window or tab. In the guides and samples repository, navigate to the directory for the consumer component and run the component in developer mode.
$ cd ~/app-services-guides/code-examples/quarkus-service-registry-quickstart/consumer
$ mvn quarkus:dev
The consumer component starts to consume the stream of quote values from the quotes topic.
- In a web browser, go to http://localhost:8080/quotes.html.
You see that the consumer component displays the stream of quote values on the web page. This output shows that the Quarkus application used the connection configuration information that you generated to connect to the Kafka and Service Registry instances in your service context.
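On the wire, a server-sent events response delivers each quote as a data: line followed by a blank line. The following Python sketch decodes such a stream; the browser front end does this with the standard EventSource API, and the JSON payload shown here is hypothetical, so the quickstart's actual format may differ:

```python
import json

def parse_sse(stream_text):
    """Extract the payload of each 'data:' line from a server-sent event stream."""
    events = []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            events.append(line[len("data:"):].strip())
    return events

# Hypothetical wire format for two quote events.
raw = 'data: {"id": "q-1", "price": 42}\n\ndata: {"id": "q-2", "price": 37}\n\n'
quotes = [json.loads(event) for event in parse_sse(raw)]
print(quotes[0]["price"])  # 42
```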
Generating connection configuration information for a Helm-based OpenShift application
The following example shows how to use Helm to deploy an example Quarkus application on Kubernetes and then connect the Quarkus application to a Kafka instance in Red Hat OpenShift Streams for Apache Kafka. Helm is an open source utility for creating, configuring, and deploying applications on Kubernetes. The Kubernetes platform referred to in the remainder of this example is Red Hat OpenShift.
One part of the example Quarkus application is designed to generate random price values and produce these to a Kafka topic. Another part of the application consumes the values from the Kafka topic and exposes them over a REST endpoint using server-sent events. Finally, a web page in the application displays the exposed values.
To connect the Quarkus application to a Kafka instance in OpenShift Streams for Apache Kafka, you define a service context that includes the Kafka instance. You then generate connection configuration information for the service context as an OpenShift ConfigMap object and provide this to the application.
- You have a running Kafka instance in OpenShift Streams for Apache Kafka.
- You have created a service context for your Kafka instance.
- You have installed the latest version of the rhoas command-line interface (CLI). See Installing and configuring the rhoas CLI.
- You have installed version 3.9.0 or greater of the Helm CLI.
- You have installed version 4.8.5 or greater of the OpenShift CLI.
- Git is installed.
- You have an IDE such as IntelliJ IDEA, Eclipse, or Visual Studio Code.
- On the command line, clone the Red Hat OpenShift Application Services Guides and Samples repository from GitHub.
$ git clone https://github.com/redhat-developer/app-services-guides app-services-guides
- Log in to the rhoas CLI.
$ rhoas login
The login command opens a sign-in process in your web browser.
- Check the service context that is currently in use.
$ rhoas context list
The CLI shows a check mark next to the service context that is currently in use.
- (Optional) To switch to a different service context that includes your Kafka instance, use the following command. Replace <service-context> with your service context name.
$ rhoas context use --name <service-context>
- Generate connection configuration information for the service context as an OpenShift ConfigMap and save this in a YAML file, as shown in the following example:
Generating ConfigMap from service context:
$ rhoas generate-config --type configmap --output-file ./rhoas-services.yaml
For a service context with a single Kafka instance, the contents of the ConfigMap file look like the following example:
Example ConfigMap file generated from service context:
apiVersion: v1
kind: ConfigMap
metadata:
  name: <service-context>-configuration
data:
  ## Kafka Configuration
  kafka_host: <host>:<port>
As indicated in the example, the name of the generated ConfigMap object has a format of <service-context>-configuration.
- On the command line, create the prices topic required by the Quarkus application.
$ rhoas kafka topic create --name prices
- Create a service account for the Quarkus application to authenticate with your Kafka instance by performing the following actions.
- Create a new service account as an OpenShift secret and save this in a YAML file, as shown in the following example:
$ rhoas service-account create --file-format secret --output-file ./rhoas-secrets.yaml
- Open the YAML file for the secret in your IDE.
The contents of the file look like the following example:
Example secret file for service account credentials:
apiVersion: v1
kind: Secret
metadata:
  name: service-account-credentials
type: Opaque
stringData:
  RHOAS_SERVICE_ACCOUNT_CLIENT_ID: <client-id>
  RHOAS_SERVICE_ACCOUNT_CLIENT_SECRET: <client-secret>
  RHOAS_SERVICE_ACCOUNT_OAUTH_TOKEN_URL: <oauth-token-endpoint-uri>
As indicated in the example, the generated secret has a default name of service-account-credentials.
- Set Access Control List (ACL) permissions to enable the new service account to access resources in the Kafka instance.
Example command for granting access to Kafka instance:
$ rhoas kafka acl grant-access --producer --consumer --service-account <client-id> --topic prices --group all
The command you entered allows applications to use the service account to produce and consume messages in the prices topic. Applications can use any consumer group.
- Log in to the OpenShift CLI using a token by performing the following actions.
- Log in to the OpenShift web console as a user who has privileges to create a new project in the cluster.
- In the upper-right corner of the console, click your user name and select Copy login command.
A new page opens.
- Click the Display Token link.
- In the Log in with this token section, copy the full oc login command shown.
- On the command line, paste and run the oc login command that you copied.
You see output confirming that you are logged in to your OpenShift cluster and the current project that you are using. By default, this is the project in which you will deploy the Quarkus application.
- (Optional) To change to another, existing project, use the following command:
$ oc project <project>
- (Optional) To create a new project in which to deploy the Quarkus application, use the following command:
$ oc new-project <project>
- Use the OpenShift CLI to apply the ConfigMap and secret files to the current project in your OpenShift cluster.
Applying ConfigMap and secret files to OpenShift project:
$ oc apply -f ./rhoas-services.yaml
$ oc apply -f ./rhoas-secrets.yaml
When you apply the YAML files, the OpenShift CLI shows the names of the ConfigMap and Secret objects that it creates in your OpenShift project, as shown in the following example output:
configmap/my-service-context-configuration created
secret/service-account-credentials created
- In your IDE, open the deployment.yaml file in the code-examples/helm-kafka-example/templates directory of the repository that you cloned.
The deployment.yaml file is a template file that is part of the Helm chart for the example Quarkus application. The template defines environment variables for the connection information required to connect to your Kafka instance. The environment variables (KAFKA_HOST, RHOAS_SERVICE_ACCOUNT_OAUTH_TOKEN_URL, RHOAS_SERVICE_ACCOUNT_CLIENT_ID, and RHOAS_SERVICE_ACCOUNT_CLIENT_SECRET) are shown in the following sample from the template file:
Example Helm template file for application deployment:
spec:
  containers:
    - env:
        - name: KAFKA_HOST
          valueFrom:
            configMapKeyRef:
              name: {{ .Values.rhoas.config }}
              key: kafka_host
        - name: RHOAS_SERVICE_ACCOUNT_OAUTH_TOKEN_URL
          valueFrom:
            secretKeyRef:
              name: {{ .Values.rhoas.secret }}
              key: RHOAS_SERVICE_ACCOUNT_OAUTH_TOKEN_URL
        - name: RHOAS_SERVICE_ACCOUNT_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: {{ .Values.rhoas.secret }}
              key: RHOAS_SERVICE_ACCOUNT_CLIENT_ID
        - name: RHOAS_SERVICE_ACCOUNT_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: {{ .Values.rhoas.secret }}
              key: RHOAS_SERVICE_ACCOUNT_CLIENT_SECRET
The template uses parameters called rhoas.config and rhoas.secret to reference the names of your ConfigMap and Secret objects. You specify the names of your ConfigMap and Secret objects as values for these parameters in a later step, when you install the Helm chart. Also, as you saw previously, the ConfigMap and Secret objects that you created contain parameters that correspond directly to the key values defined in the template.
- On the command line, navigate to the code-examples/helm-kafka-example directory of the repository that you cloned.
$ cd app-services-guides/code-examples/helm-kafka-example
- Deploy the Helm chart, specifying the names of the ConfigMap and Secret objects that you created in your OpenShift project.
Deploying the Helm chart:
$ helm install . --generate-name --set-string rhoas.config=<configmap>,rhoas.secret=service-account-credentials
You use the --set-string option to specify the names of the ConfigMap and Secret objects directly in the helm install command. You pass these values to the rhoas.config and rhoas.secret parameters that are defined in the template for the Helm chart.
An alternative way to pass values from the ConfigMap and Secret objects to the Helm template is to create a YAML file that contains the names of the ConfigMap and Secret objects. This approach also works for Deployment-based OpenShift applications. For an example of doing this, see the README file that accompanies the Quarkus application used in this example.
When you install the Helm chart, Helm automatically processes the contents of the chart's /templates directory. Helm uses the templates to generate manifests for deployment of the application and creation of a service for the application. Helm provides these manifests to OpenShift.
When the Helm chart is installed, get the service endpoint for the application on OpenShift.
$ oc get service
You see output like the following example:
Service information for running Quarkus application on OpenShiftNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rhoas-quarkus-kafka-quickstart LoadBalancer 172.30.128.12 a81b115a35629488685b6ed3cf322fbf-1904626303.us-east-2.elb.amazonaws.com 8080:31110/TCP 11m
The output indicates that the Quarkus application is successfully running on your OpenShift cluster.
- On the command line, copy the value shown under EXTERNAL-IP.
- In a web browser, go to <external-ip-value>:8080/prices.html.
You see that the web page continuously updates the Last price value. The continuously updating output shows that the Quarkus application is using the connection configuration information that you generated to connect to the Kafka instance defined in your service context. The application uses the prices topic that you created to produce and consume messages.