Connecting client applications to Red Hat OpenShift Application Services using the rhoas CLI

Guide
  • Red Hat OpenShift Application Services 1
  • Updated 22 February 2023
  • Published 28 June 2022

As a developer of applications and services, you might need to connect client applications to instances in cloud services such as Red Hat OpenShift Streams for Apache Kafka and Red Hat OpenShift Service Registry.

For example, suppose you have the following applications running locally on your computer:

  • One application that is designed to publish a stream of stock price updates

  • A second application that is designed to consume the stock price updates and chart them on a dashboard

In addition, suppose you have the following instances in Red Hat OpenShift Application Services:

  • A Kafka instance in OpenShift Streams for Apache Kafka

  • A Service Registry instance in OpenShift Service Registry

Each time the first application produces a stock price update, you want to use the Kafka instance to forward the update as an event to the second, consuming application. In addition, you want each of the applications to use a schema in the Service Registry instance to validate that messages conform to a particular format.

In this scenario, you need to connect your applications to your Kafka and Service Registry instances. To achieve this, you can use the Red Hat OpenShift Application Services (rhoas) command-line interface (CLI) to create a context for your service instances. You can then use another CLI command to generate the required connection information for each service instance in the context.

This guide describes how to use the rhoas CLI to create contexts and then generate the connection configuration information that client applications need to connect to service instances in those contexts.

About service contexts

In Red Hat OpenShift Application Services, a service context is a defined set of instances running in cloud services such as Red Hat OpenShift Streams for Apache Kafka and Red Hat OpenShift Service Registry. You might create different contexts for specific use cases, projects, or environments.

To create a context, you can use the Red Hat OpenShift Application Services (rhoas) command-line interface (CLI). New service instances that you create are automatically added to the context that is currently in use. You can switch between different contexts and add or remove service instances as required. You can include the same service instance in multiple contexts.
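For example, you can combine the context list and context use commands (both shown later in this guide) to inspect your contexts and switch between them. The context name staging-context below is a placeholder for illustration:

```shell
## List all contexts; a check mark indicates the context currently in use
$ rhoas context list

## Switch to a different context (replace staging-context with your own context name)
$ rhoas context use --name staging-context
```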

When you have created a service context, you can use a single CLI command to generate the configuration information that client applications need to connect to the instances in that context. You can generate connection configuration information in various formats such as an environment variables file (.env), a JSON file, a Java properties file, and a Kubernetes ConfigMap.
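As a sketch of the formats listed above, the generate-config command takes a --type flag. The env and configmap variants appear in the procedures later in this guide; the json and properties values shown here are assumptions based on the formats named in the preceding paragraph:

```shell
## Environment variables file (.env)
$ rhoas generate-config --type env --output-file ./rhoas.env

## Kubernetes ConfigMap in a YAML file
$ rhoas generate-config --type configmap --output-file ./rhoas-services.yaml

## JSON and Java properties files (assumed --type values)
$ rhoas generate-config --type json --output-file ./rhoas.json
$ rhoas generate-config --type properties --output-file ./rhoas.properties
```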

Creating new service contexts

The following example shows how to use the CLI to create contexts in Red Hat OpenShift Application Services and then manage the service instances that are defined in those contexts.

Prerequisites
  • You have installed the Red Hat OpenShift Application Services (rhoas) CLI. For more information, see Installing the rhoas CLI.

Procedure
  1. Log in to the CLI.

    $ rhoas login

    The login command opens a sign-in process in your web browser.

  2. Use the CLI to create a new service context.

    $ rhoas context create --name development-context

    The new context becomes the current (that is, active) context by default.

  3. Create a new Kafka instance in Red Hat OpenShift Streams for Apache Kafka.

    $ rhoas kafka create --name my-kafka-instance

    The CLI automatically adds the Kafka instance to the current context.

  4. Create a new Service Registry instance in Red Hat OpenShift Service Registry.

    $ rhoas service-registry create --name my-registry-instance

    The CLI also automatically adds the Service Registry instance to the current context.

  5. Confirm that the Kafka and Service Registry instances are in the current context and are running.

    Command to view status of service instances in current context
    $ rhoas context status

    You see output that looks like the following example:

    Status of example context with single Kafka and Service Registry instances
    Service Context Name:	development-context
    Context File Location:	/home/<user-name>/.config/rhoas/contexts.json
    
    Kafka
    -----------------------------------------------------------------------------
    ID:                cafkr2jma40lhulbl1c0
    Name:              my-kafka-instance
    Status:            ready
    Bootstrap URL:     kafka-inst-cafkr-jma--lhulbl-ca.bf2.kafka.rhcloud.com:443
    
    Service Registry
    ------------------------------------------------------------------------------
    ID:                0aa1dd8b-63d5-466c-9de8-7c03320a81c2
    Name:              my-registry-instance
    Status:            ready
    Registry URL:      https://bu98.serviceregistry.rhcloud.com/t/0aa1dd8b-63d5-466c-9de8-7c03320a81c2
  6. Create another service context.

    $ rhoas context create --name production-context

    The new service context becomes the current context.

  7. Add the Kafka instance that you created earlier in the procedure to the new service context.

    $ rhoas context set-kafka --name my-kafka-instance

    The Kafka instance is now part of both of the service contexts that you created.

To remove a particular service from a context, use the context unset command with the --services flag, specifying the service name (for example, kafka or service-registry), as shown in the following example:

$ rhoas context unset --name development-context --services kafka

Generating connection configuration information for a local Quarkus application

The following example shows how to connect an example Quarkus application running locally on your computer to the service instances defined in a context in Red Hat OpenShift Application Services. Quarkus is a Kubernetes-native Java framework that is optimized for serverless, cloud, and Kubernetes environments.

The Quarkus application uses a topic in a Kafka instance to produce and consume a stream of quote values and display these on a web page. The application consists of two components:

  • A producer component that periodically produces a new quote value and publishes this to a Kafka topic called quotes.

  • A consumer component that streams quote values from the Kafka topic. This component also has a minimal front end that uses server-sent events to show the quote values on a web page.

In addition, the producer and consumer components serialize and deserialize Kafka messages using an Avro schema stored in Service Registry. Use of the schema ensures that message values conform to a defined format.

Prerequisites
  • You have a Red Hat account.

  • You have a service context with a Kafka and Service Registry instance.

  • Git is installed.

  • You have an IDE such as IntelliJ IDEA, Eclipse, or Visual Studio Code.

  • OpenJDK 11 or later is installed. (The latest LTS version of OpenJDK is recommended.)

  • Apache Maven 3.8 (or a later Maven 3 release) is installed.

Procedure
  1. On the command line, clone the Red Hat OpenShift Application Services Guides and Samples repository from GitHub.

    $ git clone https://github.com/redhat-developer/app-services-guides app-services-guides
  2. In your IDE, open the code-examples/quarkus-service-registry-quickstart directory from the repository that you cloned.

    You see that the sample Quarkus application has two components: a producer component and a consumer component. The producer component publishes a stream of quote values to a Kafka topic. The consumer component consumes these values and displays them on a web page.

  3. On the command line, create the quotes topic required by the Quarkus application.

    $ rhoas kafka topic create --name quotes
  4. Ensure that you are using the service context that includes your Kafka and Service Registry instances, as shown in the following example:

    $ rhoas context use --name development-context
  5. In the guides and samples repository that you cloned, navigate to the directory for the Quarkus application.

    $ cd ~/app-services-guides/code-examples/quarkus-service-registry-quickstart/
  6. Create a service account for the Quarkus application to authenticate with the Kafka and Service Registry instances in the context. Save the credentials in an environment variables file in the directory for the producer component.

    $ rhoas service-account create --file-format env --output-file ./producer/.env
  7. Generate an environment variables file that contains the connection configuration information required by the producer component.

    $ rhoas generate-config --type env --output-file ./producer/rhoas.env
  8. Append the contents of the connection configuration file to the service account environment variables file.

    $ cat ./producer/rhoas.env >> ./producer/.env
  9. Copy the updated .env file to the directory for the consumer component, as shown in the following Linux example:

    $ cp ./producer/.env ./consumer/.env

    For a service context with single Kafka and Service Registry instances, the .env file looks like the following example:

    Example environment variables file for connection configuration and credentials
    ## Generated by rhoas cli
    RHOAS_SERVICE_ACCOUNT_CLIENT_ID=<client-id>
    RHOAS_SERVICE_ACCOUNT_CLIENT_SECRET=<client-secret>
    RHOAS_SERVICE_ACCOUNT_OAUTH_TOKEN_URL=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
    ## Generated by rhoas cli
    ## Kafka Configuration
    KAFKA_HOST=kafka-inst-cafkr-jma--lhulbl-ca.bf2.kafka.rhcloud.com:443
    ## Service Registry Configuration
    SERVICE_REGISTRY_URL=https://bu98.serviceregistry.rhcloud.com/t/0aa1dd8b-63d5-466c-9de8-7c03320a81c2
    SERVICE_REGISTRY_CORE_PATH=/apis/registry/v2
    SERVICE_REGISTRY_COMPAT_PATH=/apis/ccompat/v6

    As shown in the example, the file that you generate contains the endpoints for your service instances, and the credentials required to connect to those instances.

  10. Set Access Control List (ACL) permissions to enable the new service account to access resources in the Kafka instance.

    Example command for granting access to Kafka instance
    $ rhoas kafka acl grant-access --producer --consumer --service-account <client-id> --topic quotes --group all

    The command you entered allows applications to use the service account to produce and consume messages in the quotes topic. Applications can use any consumer group.

  11. Use Role-Based Access Control (RBAC) to enable the new service account to access the Service Registry instance and the artifacts (such as schemas) that it contains.

    Example command for granting access to Service Registry instance
    $ rhoas service-registry role add --role manager --service-account <client-id>
  12. In the guides and samples repository, navigate to the directory for the producer component. Use Apache Maven to run the producer component in developer mode.

    $ cd ~/app-services-guides/code-examples/quarkus-service-registry-quickstart/producer
    $ mvn quarkus:dev

    The producer component starts to generate quote values to the quotes topic in the Kafka instance.

    The Quarkus application has also created an Avro schema called quotes-value in the Service Registry instance. The producer and consumer components use the schema to ensure that message values conform to a defined format.

    To view the contents of the quotes-value schema, run the following command:

    $ rhoas service-registry artifact get --artifact-id quotes-value

    You see output that looks like the following example:

    Example Avro schema in Service Registry
    {
      "type": "record",
      "name": "Quote",
      "namespace": "org.acme.kafka.quarkus",
      "fields": [
        {
          "name": "id",
          "type": {
            "type": "string",
            "avro.java.string": "String"
          }
        },
        {
          "name": "price",
          "type": "int"
        }
      ]
    }
  13. With the producer component still running, open a second command-line window or tab. In the guides and samples repository, navigate to the directory for the consumer component and run the component in developer mode.

    $ cd ~/app-services-guides/code-examples/quarkus-service-registry-quickstart/consumer
    $ mvn quarkus:dev

    The consumer component starts to consume the stream of quote values from the quotes topic.

  14. In a web browser, go to http://localhost:8080/quotes.html.

    You see that the consumer component displays the stream of quote values on the web page. This output shows that the Quarkus application used the connection configuration information that you generated to connect to the Kafka and Service Registry instances in your service context.
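The components pick up the generated connection values through Quarkus configuration. As an illustrative sketch only (these property names are assumptions and may differ from the sample's actual application.properties), a Quarkus application can reference the generated environment variables by using MicroProfile Config property expansion:

```properties
## Illustrative only: property names are assumptions, not the sample's actual configuration.
## ${KAFKA_HOST} and ${SERVICE_REGISTRY_URL} resolve from the environment variables in the generated .env file.
kafka.bootstrap.servers=${KAFKA_HOST}
mp.messaging.connector.smallrye-kafka.apicurio.registry.url=${SERVICE_REGISTRY_URL}${SERVICE_REGISTRY_CORE_PATH}
```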

Generating connection configuration information for a Helm-based OpenShift application

The following example shows how to use Helm to deploy an example Quarkus application on Kubernetes and then connect the Quarkus application to a Kafka instance in Red Hat OpenShift Streams for Apache Kafka. Helm is an open source utility for creating, configuring, and deploying applications on Kubernetes. The Kubernetes platform referred to in the remainder of this example is Red Hat OpenShift.

One part of the example Quarkus application is designed to generate random price values and produce these to a Kafka topic. Another part of the application consumes the numbers from the Kafka topic and uses server-sent events to expose the values over a REST endpoint. Finally, a web page in the application displays the exposed values.

To connect the Quarkus application to a Kafka instance in OpenShift Streams for Apache Kafka, you define a service context that includes the Kafka instance. You then generate connection configuration information for the service context as an OpenShift ConfigMap object and provide this to the application.

Prerequisites
  • You have installed the Red Hat OpenShift Application Services (rhoas) CLI. For more information, see Installing the rhoas CLI.

  • You have a service context that includes a Kafka instance.

  • You have installed the Helm CLI and the OpenShift (oc) CLI.

  • You have access to an OpenShift cluster and privileges to create or use a project in the cluster.

  • Git is installed.
Procedure
  1. On the command line, clone the Red Hat OpenShift Application Services Guides and Samples repository from GitHub.

    $ git clone https://github.com/redhat-developer/app-services-guides app-services-guides
  2. Log in to the rhoas CLI.

    $ rhoas login

    The login command opens a sign-in process in your web browser.

  3. Check the service context that is currently in use.

    $ rhoas context list

    The CLI shows a check mark next to the service context that is currently in use.

  4. (Optional) To switch to a different service context that includes your Kafka instance, use the following command. Replace <service-context> with your service context name.

    $ rhoas context use --name <service-context>
  5. Generate connection configuration information for the service context as an OpenShift ConfigMap and save this in a YAML file, as shown in the following example:

    Generating ConfigMap from service context
    $ rhoas generate-config --type configmap --output-file ./rhoas-services.yaml

    For a service context with a single Kafka instance, the contents of the ConfigMap file look like the following example:

    Example ConfigMap file generated from service context
    apiVersion: v1
    kind: ConfigMap
    metadata:
        name: <service-context>-configuration
    data:
        ## Kafka Configuration
        kafka_host: <host>:<port>

    As indicated in the example, the name of the generated ConfigMap object has a format of <service-context>-configuration.

  6. On the command line, create the prices topic required by the Quarkus application.

    $ rhoas kafka topic create --name prices
  7. Create a service account for the Quarkus application to authenticate with your Kafka instance by performing the following actions.

    1. Create a new service account as an OpenShift secret and save this in a YAML file, as shown in the following example:

      $ rhoas service-account create --file-format secret --output-file ./rhoas-secrets.yaml
    2. Open the YAML file for the secret in your IDE.

      The contents of the file look like the following example:

      Example secret file for service account credentials
      apiVersion: v1
      kind: Secret
      metadata:
        name: service-account-credentials
      type: Opaque
      stringData:
        RHOAS_SERVICE_ACCOUNT_CLIENT_ID: <client-id>
        RHOAS_SERVICE_ACCOUNT_CLIENT_SECRET: <client-secret>
        RHOAS_SERVICE_ACCOUNT_OAUTH_TOKEN_URL: <oauth-token-endpoint-uri>

      As indicated in the example, the generated secret has a default name of service-account-credentials.

    3. Set Access Control List (ACL) permissions to enable the new service account to access resources in the Kafka instance.

      Example command for granting access to Kafka instance
      $ rhoas kafka acl grant-access --producer --consumer --service-account <client-id> --topic prices --group all

      The command you entered allows applications to use the service account to produce and consume messages in the prices topic. Applications can use any consumer group.

  8. Log in to the OpenShift CLI using a token by performing the following actions.

    1. Log in to the OpenShift web console as a user who has privileges to create a new project in the cluster.

    2. In the upper-right corner of the console, click your user name and select Copy login command.

      A new page opens.

    3. Click the Display Token link.

    4. In the Log in with this token section, copy the full oc login command shown.

    5. On the command line, right-click and select Paste.

      You see output confirming that you are logged in to your OpenShift cluster and the current project that you are using. By default, this is the project in which you will deploy the Quarkus application.

    6. (Optional) To change to another, existing project, use the following command:

      $ oc project <project>
    7. (Optional) To create a new project in which to deploy the Quarkus application, use the following command:

      $ oc new-project <project>
  9. Use the OpenShift CLI to apply the ConfigMap and secret files to the current project in your OpenShift cluster.

    Applying ConfigMap and secret files to OpenShift project
    $ oc apply -f ./rhoas-services.yaml
    $ oc apply -f ./rhoas-secrets.yaml

    When you apply the YAML files, the OpenShift CLI shows the names of the ConfigMap and Secret objects that it creates in your OpenShift project, as shown in the following example output:

    configmap/my-service-context-configuration created
    secret/service-account-credentials created
  10. In your IDE, open the deployment.yaml file in the code-examples/helm-kafka-example/templates directory of the repository that you cloned.

    The deployment.yaml file is a template file that is part of the Helm chart for the example Quarkus application. The template defines environment variables for the connection information required to connect to your Kafka instance. The environment variables (KAFKA_HOST, RHOAS_SERVICE_ACCOUNT_OAUTH_TOKEN_URL, RHOAS_SERVICE_ACCOUNT_CLIENT_ID, and RHOAS_SERVICE_ACCOUNT_CLIENT_SECRET) are shown in the following sample from the template file:

    Example Helm template file for application deployment
    spec:
      containers:
        - env:
            - name: KAFKA_HOST
              valueFrom:
                configMapKeyRef:
                  name: {{ .Values.rhoas.config }}
                  key: kafka_host
            - name: RHOAS_SERVICE_ACCOUNT_OAUTH_TOKEN_URL
              valueFrom:
                secretKeyRef:
                  name:  {{ .Values.rhoas.secret }}
                  key: RHOAS_SERVICE_ACCOUNT_OAUTH_TOKEN_URL
            - name: RHOAS_SERVICE_ACCOUNT_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name:  {{ .Values.rhoas.secret }}
                  key: RHOAS_SERVICE_ACCOUNT_CLIENT_ID
            - name: RHOAS_SERVICE_ACCOUNT_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name:  {{ .Values.rhoas.secret }}
                  key: RHOAS_SERVICE_ACCOUNT_CLIENT_SECRET

    The template uses parameters called rhoas.config and rhoas.secret to reference the names of your ConfigMap and Secret objects. You specify the names of your ConfigMap and Secret objects as values for these parameters in a later step, when you install the Helm chart. Also, as you saw previously, the ConfigMap and Secret objects that you created contain parameters that correspond directly to the key values defined in the template.

  11. On the command line, navigate to the code-examples/helm-kafka-example directory of the repository that you cloned.

    $ cd app-services-guides/code-examples/helm-kafka-example
  12. Deploy the Helm chart, specifying the names of the ConfigMap and Secret objects that you created in your OpenShift project.

    Deploying the Helm chart
    $ helm install . --generate-name --set-string rhoas.config=<configmap>,rhoas.secret=service-account-credentials

    You use the --set-string option to specify the names of the ConfigMap and Secret objects directly in the helm install command. You pass these values to the rhoas.config and rhoas.secret parameters that are defined in the template for the Helm chart.

    An alternative way to pass values from the ConfigMap and Secret objects to the Helm template is to create a YAML file that contains the names of the ConfigMap and Secret objects. This approach also works for Deployment-based OpenShift applications. For an example of doing this, see the README file that accompanies the Quarkus application used in this example.

    When you install the Helm chart, Helm automatically processes the contents of the chart's templates directory. Helm uses the templates to generate manifests for deployment of the application and creation of a service for the application. Helm provides these manifests to OpenShift.

  13. When the Helm chart is installed, get the service endpoint for the application on OpenShift.

    $ oc get service

    You see output like the following example:

    Service information for running Quarkus application on OpenShift
    NAME                                TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)          AGE
    rhoas-quarkus-kafka-quickstart      LoadBalancer   172.30.128.12    a81b115a35629488685b6ed3cf322fbf-1904626303.us-east-2.elb.amazonaws.com   8080:31110/TCP   11m

    The output indicates that the Quarkus application is successfully running on your OpenShift cluster.

  14. On the command line, copy the value shown under EXTERNAL-IP.

  15. In a web browser, go to http://<external-ip-value>:8080/prices.html.

    You see that the web page continuously updates the Last price value. The continuously updating output shows that the Quarkus application is using the connection configuration information that you generated to connect to the Kafka instance defined in your service context. The application uses the prices topic that you created to produce and consume messages.
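As mentioned in step 12 of the procedure, an alternative to the --set-string option is to put the ConfigMap and Secret names in a values file and pass it to Helm with the standard -f (--values) flag. The following is a minimal sketch, using the object names created earlier in the procedure and a hypothetical file name of rhoas-values.yaml:

```yaml
## rhoas-values.yaml (hypothetical file name)
## Maps the generated ConfigMap and Secret names to the chart's
## rhoas.config and rhoas.secret template parameters.
rhoas:
  config: my-service-context-configuration
  secret: service-account-credentials
```

You could then install the chart with a command such as: helm install . --generate-name -f ./rhoas-values.yaml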