Service Binding

  • Red Hat build of Quarkus 2.7
  • Updated 09 March 2023
  • Published 26 July 2022


The following chapter provides information about Service Binding and Workload Projection, which were added to Red Hat build of Quarkus in version 2.7.5 and are currently available as a Technology Preview.

Generally, OpenShift applications and services, also referred to as deployable workloads, need to be connected to other services for retrieving additional information, such as service URLs or credentials.

The Service Binding Operator manages the required communication for obtaining this information. This Operator then determines the following:

  • How a service consumer intends to bind to such a service

  • The tools for application and service binding, such as the quarkus-kubernetes-service-binding extension

Quarkus supports the Service Binding Specification for Kubernetes to bind services to applications.

Specifically, Quarkus implements the Workload Projection part of the specification, allowing applications to bind to services, such as a Database or a Broker, without the need for user configuration.

To enable Service Binding for the available extensions, add the quarkus-kubernetes-service-binding extension to the application dependencies.

  • The following extensions can be used with Service Binding and are available for Workload Projection:

    • quarkus-jdbc-mariadb

    • quarkus-jdbc-mssql

    • quarkus-jdbc-mysql

    • quarkus-jdbc-postgresql

    • quarkus-mongo-client - Technology Preview

    • quarkus-kafka-client

    • quarkus-smallrye-reactive-messaging-kafka

1. Workload projection

Workload Projection is a process of obtaining the configuration for services from the Kubernetes cluster. This configuration takes the form of directory structures that follow certain conventions and is attached to an application or to a service as a mounted volume. The kubernetes-service-binding extension uses this directory structure to create configuration sources, which allows you to configure additional modules, such as databases or message brokers.

You can use workload projection during application development to connect your application to a development database or other locally run services without changing the actual application code or configuration.

For an example of a workload projection where the directory structure is included in the test resources and passed to integration test, see the Kubernetes Service Binding datasource GitHub repository.

  • The k8s-sb directory is the root of all service bindings. In this example, only one database, called fruit-db, is intended to be bound. The binding directory contains a type file, which indicates postgresql as the database type, while the other files in the directory provide the information necessary to establish the connection.

  • After your Quarkus project obtains information from the SERVICE_BINDING_ROOT environment variable that is set by OpenShift, you can locate the generated configuration files in the file system and use them to map the configuration-file values to properties of certain extensions.
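To make the mechanism concrete, the following is a hypothetical Java sketch (the class name BindingReader and all file names are illustrative, not part of the extension's API) of how one binding directory such as k8s-sb/fruit-db can be read into a key/value map, which is conceptually what the kubernetes-service-binding extension does when it turns projected files into configuration sources:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.HashMap;
import java.util.Map;

public class BindingReader {

    // Reads one binding directory (e.g. $SERVICE_BINDING_ROOT/fruit-db):
    // each regular file name becomes a key, the trimmed file content its value.
    static Map<String, String> readBinding(Path bindingDir) throws IOException {
        Map<String, String> props = new HashMap<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(bindingDir)) {
            for (Path file : files) {
                if (Files.isRegularFile(file)) {
                    props.put(file.getFileName().toString(),
                              Files.readString(file).trim());
                }
            }
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a projected binding directory for the fruit-db example.
        Path root = Files.createTempDirectory("k8s-sb");
        Path binding = Files.createDirectories(root.resolve("fruit-db"));
        Files.writeString(binding.resolve("type"), "postgresql\n");
        Files.writeString(binding.resolve("username"), "quarkus\n");

        Map<String, String> props = readBinding(binding);
        System.out.println(props.get("type"));      // postgresql
        System.out.println(props.get("username"));  // quarkus
    }
}
```

In a real cluster, the root directory is the mounted volume pointed to by SERVICE_BINDING_ROOT, and the extension maps these values onto extension-specific properties instead of a plain map.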

2. Introduction to the Service Binding Operator

The Service Binding Operator is an Operator that implements the Service Binding Specification for Kubernetes and is meant to simplify the binding of services to an application. Containerized applications that support Workload Projection obtain service binding information in the form of volume mounts. The Service Binding Operator reads binding service information and mounts it to the application containers that need it.

The correlation between an application and its bound services is expressed through ServiceBinding resources, which declare the intent of which services are meant to be bound to which application.

The Service Binding Operator watches for ServiceBinding resources, which inform the Operator what applications are meant to be bound with what services. When a listed application is deployed, the Service Binding Operator collects all the binding information that must be passed to the application, then upgrades the application container by attaching a volume mount with the binding information.

The Service Binding Operator completes the following actions:

  • Observes ServiceBinding resources for workloads intended to be bound to a particular service

  • Applies the binding information to the workload using volume mounts

The following chapter describes the automatic and semi-automatic service binding approaches and their use cases. With either approach, the kubernetes-service-binding extension generates a ServiceBinding resource. With the semi-automatic approach, you must provide the configuration for target services manually. With the automatic approach, the ServiceBinding resource is generated for a limited set of services without any additional configuration.


3. Semi-automatic Service Binding

A service binding process starts with a user specifying the required services that are to be bound to a certain application. This intent is summarized in the ServiceBinding resource, which the kubernetes-service-binding extension generates. Using the kubernetes-service-binding extension helps you generate ServiceBinding resources with minimal configuration, which simplifies the process overall.

The Service Binding Operator responsible for the binding process then reads the information from the ServiceBinding resource and mounts the required files to a container accordingly.

  • An example of the ServiceBinding resource:

    apiVersion:
    kind: ServiceBinding
    metadata:
      name: binding-request
      namespace: service-binding-demo
    spec:
      application:
        name: java-app
        group: apps
        version: v1
        resource: deployments
      services:
      - group:
        version: v1beta1
        kind: Database
        name: db-demo
        id: postgresDB
    • The quarkus-kubernetes-service-binding extension provides a more compact way of expressing the same information. For example:
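A sketch of such a compact configuration, using the extension's quarkus.kubernetes-service-binding.services.<id>. property namespace; <group> stands in for the API group that is elided in the resource above:

```properties
# db-demo is the service id; it groups the properties together
quarkus.kubernetes-service-binding.services.db-demo.api-version=<group>/v1beta1
quarkus.kubernetes-service-binding.services.db-demo.kind=Database
```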

After you add these configuration properties to your application.properties file, the quarkus-kubernetes extension, in combination with the quarkus-kubernetes-service-binding extension, automatically generates the ServiceBinding resource.

The db-demo property-configuration identifier mentioned earlier has a double role and also completes the following actions:

  • Correlates and groups api-version and kind properties together

  • Defines the name property for the custom resource, which you can edit later. For example:
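A sketch of such an override, where my-db is an illustrative value replacing the default name derived from the db-demo identifier:

```properties
quarkus.kubernetes-service-binding.services.db-demo.name=my-db
```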

4. Generating a ServiceBinding custom resource by using the semi-automatic method

Use the following information to generate a ServiceBinding resource semi-automatically. You will also learn more about the OpenShift deployment process, including how to install operators to configure and deploy an application.

The following procedure installs the Service Binding Operator and the PostgreSQL Operator from Crunchy Data. Note that the PostgreSQL Operator from Crunchy Data is third-party software with its own support policies and terms of use. The procedure then creates a PostgreSQL cluster, creates a simple application, deploys the application, and binds it to the provisioned cluster.

  • An OpenShift 4.10 cluster

  • Access to OperatorHub and the OpenShift Administrator privileges needed to install cluster-wide Operators from OperatorHub

  • The oc command-line tool installed

  • Maven and Java installed


The steps in the following procedure use the HOME (~) directory as a saving and installation destination.

  1. Install the Service Binding Operator version 1.0 or later by using the Installing the Service Binding Operator from the OpenShift Container Platform web UI procedure.

    1. Verify the installation:

      oc get csv -n openshift-operators -w
  2. Install the PostgreSQL Crunchy Operator using the Deploy & use tab of the Crunchy PostgreSQL Operator product page.

    1. Verify the installation:

      oc get csv -n openshift-operators -w
      • When the phase of the operator is set to Succeeded, proceed to the next step.

  3. Create a PostgreSQL cluster:

    1. Create a new OpenShift namespace in which you will create the cluster and later deploy your application. Throughout this procedure, the namespace is called demo.

      oc new-project demo
    2. Create the following custom resource and save it as pg-cluster.yml:

      apiVersion:
      kind: PostgresCluster
      metadata:
        name: hippo
      spec:
        openshift: true
        postgresVersion: 14
        instances:
          - name: instance1
            dataVolumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources: {requests: {storage: 1Gi}}
        backups:
          pgbackrest:
            repos:
              - name: repo1
                volume:
                  volumeClaimSpec:
                    accessModes: ["ReadWriteOnce"]
                    resources: {requests: {storage: 1Gi}}
      This YAML has been reused from Service Binding Operator Quickstart.
    3. Apply the created custom resource:

      oc apply -f ~/pg-cluster.yml
      This command assumes that you saved the pg-cluster.yml file in HOME.
    4. Check the Pods to verify the installation:

      oc get pods -n demo
      • Wait for the Pods to get into the READY state, which signals the installation is complete.

  4. Create a Quarkus application that binds to the PostgreSQL database.

    • The application that you create is a simple todo application that connects to PostgreSQL by using Hibernate with Panache.

      1. Generate the application:

        mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.7.7.Final-redhat-00005:create \
          -DplatformGroupId=com.redhat.quarkus.platform \
          -DplatformVersion=2.7.7.Final-redhat-00005 \
          -DprojectGroupId=org.acme \
          -DprojectArtifactId=todo-example \
          -DclassName="org.acme.TodoResource"
      2. Add all required extensions for connecting to PostgreSQL, generating all required resources, and building a container image for our application:

        ./mvnw quarkus:add-extension -Dextensions="resteasy-jackson,jdbc-postgresql,hibernate-orm-panache,openshift,kubernetes-service-binding"
      3. Create a simple entity as follows:

        package org.acme;

        import javax.persistence.Column;
        import javax.persistence.Entity;
        import io.quarkus.hibernate.orm.panache.PanacheEntity;

        @Entity
        public class Todo extends PanacheEntity {
            @Column(length = 40, unique = true)
            public String title;
            public boolean completed;

            public Todo() {
            }

            public Todo(String title, Boolean completed) {
                this.title = title;
                this.completed = completed;
            }
        }
      4. Expose the entity:

        package org.acme;

        import java.util.List;
        import javax.transaction.Transactional;
        import javax.ws.rs.*;
        import javax.ws.rs.core.Response;
        import javax.ws.rs.core.Response.Status;

        @Path("/todo")
        public class TodoResource {
            @GET
            public List<Todo> getAll() {
                return Todo.listAll();
            }

            @GET @Path("/{id}")
            public Todo get(@PathParam("id") Long id) {
                Todo entity = Todo.findById(id);
                if (entity == null) {
                    throw new WebApplicationException("Todo with id of " + id + " does not exist.", Status.NOT_FOUND);
                }
                return entity;
            }

            @POST @Transactional
            public Response create(Todo item) {
                item.persist();
                return Response.status(Status.CREATED).entity(item).build();
            }

            @PATCH @Transactional @Path("/{id}/complete")
            public Response complete(@PathParam("id") Long id) {
                Todo entity = Todo.findById(id);
       = id;
                entity.completed = true;
                return Response.ok(entity).build();
            }

            @DELETE @Transactional @Path("/{id}")
            public Response delete(@PathParam("id") Long id) {
                Todo entity = Todo.findById(id);
                if (entity == null) {
                    throw new WebApplicationException("Todo with id of " + id + " does not exist.", Status.NOT_FOUND);
                }
                entity.delete();
                return Response.noContent().build();
            }
        }
  5. Bind to the target PostgreSQL cluster by generating a ServiceBinding resource.

    1. Provide the service coordinates to generate the binding and configure the data source:

      • apiVersion:

      • kind: PostgresCluster

      • name: pg-cluster

        You provide these coordinates by setting properties with a quarkus.kubernetes-service-binding.services.<id>. prefix, as in the example below. The <id> is used to group properties together and can be anything.
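A sketch of these properties, assuming pg-cluster as the grouping <id> and following the coordinates listed above; the api-version value is left empty here, as it is elsewhere in this chapter:

```properties
quarkus.kubernetes-service-binding.services.pg-cluster.api-version=
quarkus.kubernetes-service-binding.services.pg-cluster.kind=PostgresCluster
quarkus.kubernetes-service-binding.services.pg-cluster.name=pg-cluster
```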
    2. Create an import.sql script with some initial data:

      INSERT INTO todo(id, title, completed) VALUES (nextval('hibernate_sequence'), 'Finish the blog post', false);
  6. Deploy the application, including ServiceBinding, and apply it to the cluster:

    mvn clean install -Dquarkus.kubernetes.deploy=true -DskipTests
    • Wait for the deployment to finish.

  1. Verify the deployment:

    oc get pods -n demo -w
  2. Verify the installation:

    1. Port forward to http port locally and access the /todo endpoint:

      oc port-forward service/todo-example 8080:80
    2. Open the following URL in a browser:

      http://localhost:8080/todo
Additional resources
  • For more information, see the Service Binding Operator section of the Quick Start guide.

5. Automatic Service Binding

The quarkus-kubernetes-service-binding extension can generate the ServiceBinding resource automatically after detecting that an application requires access to the external services that are provided by available bindable Operators.

Automatic service binding can be generated for a limited number of service types. To be consistent with established terminology for Kubernetes and Quarkus services, this chapter refers to these service types as kinds.

Table 1. Operators that support Automatic Service Binding

  Operator                 | Api Version
  -------------------------|------------
  CrunchyData Postgres     |
  Percona XtraDB Cluster   |
  Percona Mongo            |


Red Hat build of Quarkus 2.7 support for Mongo Operator is provided as a Technology Preview and applies to the client only. Red Hat build of Quarkus 2.7 does not support Panache extensions.

5.1. Automatic datasource binding

For traditional databases, automatic binding is initiated whenever a datasource is configured as follows:
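For example, a minimal datasource configuration of the following form triggers the binding; postgresql here matches the db-kind value referenced below:

```properties
quarkus.datasource.db-kind=postgresql
```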


The previous configuration, combined with the presence of the quarkus-datasource, quarkus-jdbc-postgresql, quarkus-kubernetes, and quarkus-kubernetes-service-binding extensions in the application, results in the generation of a ServiceBinding resource for the postgresql database type.

The generated ServiceBinding resource binds the service to the application by using the apiVersion and kind properties of the Operator resource that matches the postgresql Operator in use.

When you do not specify a name for your database service, the value of the db-kind property is used as the default name.

 - apiVersion:
   kind: PostgresCluster
   name: postgresql

You can specify the name of the datasource as follows:
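For example, a named datasource of the following form, with fruits-db as the datasource name:

```properties
quarkus.datasource.fruits-db.db-kind=postgresql
```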


The service in the generated ServiceBinding then displays as follows:

 - apiVersion:
   kind: PostgresCluster
   name: fruits-db

Similarly, if you use mysql, the name of the datasource can be specified as follows:
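For example, keeping the same datasource name but with mysql as the database kind:

```properties
quarkus.datasource.fruits-db.db-kind=mysql
```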


The generated service contains the following:

 - apiVersion:
   kind: PerconaXtraDBCluster
   name: fruits-db

5.1.1. Customizing Automatic Service Binding

Even though automatic binding was developed to eliminate as much manual configuration as possible, there are cases where you might still need to modify the generated ServiceBinding resource. The generation process relies exclusively on information extracted from the application and on knowledge of the supported Operators, which might not reflect what is deployed in the cluster. The generated resource is based purely on knowledge of the supported bindable Operators for popular service kinds and on a set of conventions that were developed to prevent possible mismatches, such as:

  • The target resource name does not match the datasource name

  • A specific Operator needs to be used rather than the default Operator for that service kind

  • Version conflicts that occur when you need to use a version other than the default or latest version

The generation process applies the following conventions:

  • The target resource coordinates are determined based on the type of Operator and the kind of service.

  • The target resource name is set by default to match the service kind, such as postgresql, mysql, mongo.

  • For named datasources, the name of the datasource is used.

  • For named mongo clients, the name of the client is used.

Example 1 - Name mismatch

For cases in which you need to modify the generated ServiceBinding resource to fix a name mismatch, use the quarkus.kubernetes-service-binding.services properties and specify the service's name under the appropriate service key.

The service key is usually the name of the service, for example the name of the datasource, or the name of the mongo client. When this value is not available, the datasource type, such as postgresql, mysql, mongo, is used instead.

To avoid naming conflicts between different types of services, prefix the service key with a specific datasource type, such as postgresql-<person>.

The following example shows how to customize the apiVersion property of the PostgresCluster resource:
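A sketch of such a customization; the service key postgresql is the default for an unnamed postgresql datasource, and <group>/<version> stands in for the actual coordinates of the Operator resource you target:

```properties
quarkus.kubernetes-service-binding.services.postgresql.api-version=<group>/<version>
```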

Example 2: Application of a custom name for a datasource

In Example 1, the db-kind (postgresql) was used as the service key. In this example, because the datasource is named, the datasource name (fruits-db) is used as the service key instead, according to convention.

The following example shows that for a named datasource, the datasource name is used as the name of the target resource:
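A sketch under this convention, assuming a named datasource called fruits-db; the generated service entry then uses fruits-db as the name of the target resource:

```properties
quarkus.datasource.fruits-db.db-kind=postgresql
```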


This has the same effect as the following configuration:
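A sketch of the equivalent explicit configuration, using the services property namespace shown earlier in this chapter:

```properties
quarkus.datasource.fruits-db.db-kind=postgresql
quarkus.kubernetes-service-binding.services.fruits-db.name=fruits-db
```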
Additional resources
  • For additional information about the available properties, see the Workload Projection part of the Service Binding specification.

Revised on 2023-03-09 05:33:39 UTC