Chapter 7. Securing data

To prevent unauthorized access to data, you can implement the following measures:

  • Encrypt communications between the virtual database and other database clients and servers.
  • Use OpenShift secrets to store the values of properties.
  • Configure integration with Red Hat Single Sign-On in OpenShift to enable OpenID-Connect authentication and OAuth2 authorization.
  • Apply role-based access controls to your virtual database.
  • Configure 3scale to secure OData API endpoints.

7.1. Certificates and data virtualization

You can use TLS to secure communications between the data virtualization service and other services. For example, you can use TLS to encrypt the traffic that the service exchanges during the following operations:

  • Responding to queries from JDBC and PostgreSQL clients.
  • Responding to calls from REST or OData APIs over HTTPS.
  • Communicating with a Keycloak/RH-SSO server.
  • Communicating with SFTP data sources.

Certificate types

To encrypt traffic, you must add a TLS certificate for the virtual database service to the cluster. You can use either of two types of certificates to configure encryption: a self-signed service certificate that is generated by the OpenShift certificate authority, or a custom certificate from a trusted third-party Certificate Authority (CA). If you use a custom certificate, you must configure it before you build and deploy the virtual database.

Certificate scope

After you configure a certificate for the data virtualization service, you can use the certificate for all of the data virtualization operations within the OpenShift cluster.

7.1.1. Service-generated certificates

Service certificates provide for encrypted communications with internal and external services alike. However, only internal services, that is, services that are deployed in the same OpenShift cluster, can validate the authenticity of a service certificate.

OpenShift service certificates have the following characteristics:

  • Consist of a public key certificate (tls.crt) and a private key (tls.key) in PEM base-64-encoded format.
  • Stored in an encryption secret in the OpenShift pod.
  • Signed by the OpenShift CA.
  • Valid for one year.
  • Replaced automatically before expiration.
  • Can be validated by internal services only.

Using a service-generated certificate is the simplest way to secure communications between a virtual database and other applications and services in the cluster. When you run the Data Virtualization Operator to create a virtual database, it checks for the existence of a secret that is named after the virtual database that is defined in the CR, for example, VDB_NAME-certificates.

If the Operator detects a secret with a name that matches the virtual database name, it converts the certificate and key in the secret into a Java Keystore. The Operator then configures the Keystore for use with the virtual database container that it deploys.

If the Operator does not find a secret with the name of the virtual database, it creates service-generated certificate files in PEM format that define the public key certificate and private encryption key for the service. Also known as service serving certificates, service-generated certificates originate from the OpenShift Certificate Authority.

The following certificate files are created:

  • tls.crt - TLS public key certificate
  • tls.key - TLS private encryption key

The Operator stores the generated certificate files in a secret with the name of the virtual database: VDB_NAME-certificates.
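
For reference, a minimal sketch of what the generated secret might look like for a virtual database named dv-customer follows. The kubernetes.io/tls type shown here is the standard type for OpenShift service serving certificate secrets, and the data values are placeholders; the secret is created and rotated automatically, so you do not edit it by hand.

apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: dv-customer-certificates
data:
  tls.crt: <Base64-encoded public key certificate>
  tls.key: <Base64-encoded private key>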

When the certificate and key are converted to a Keystore, the Operator also adds the default Truststore from the Kubernetes /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt certificate. After the Keystore and Truststore are deployed, the virtual database service can communicate securely with other services in the cluster, as long as they use service-based certificates. However, the virtual database cannot exchange secure communications with services that use other certificate types, such as custom certificates or other types of self-signed certificates. To enable communication with these services, and with services that are hosted outside of the cluster, you must configure the service to use custom certificates.

Additional resources

For more information about service serving certificates in OpenShift, see the OpenShift Authentication documentation.

7.1.2. Custom certificates

To support secure communications between the virtual database service and applications outside of the OpenShift cluster, you can obtain custom certificates from a trusted third-party Certificate Authority.

External services do not recognize the validity of certificates that are generated by the OpenShift certificate authority. For an external service to validate custom TLS certificates, the certificates must originate from a trusted, third-party certificate authority (CA). Such certificates are universally recognized, and can be verified by any client. Information about how to obtain a certificate from a third-party CA is beyond the scope of this document.

You can add custom certificates to a virtual database by supplying information about the certificate in an encryption secret and deploying the secret to OpenShift before you run the Data Virtualization Operator to create the service. After you deploy an encryption secret to OpenShift, it becomes available to the Data Virtualization Operator when it creates a virtual database. The Operator detects the secret with the name that matches the name of the virtual database in the CR, and it automatically configures the virtual database service to use the specified certificate to encrypt communications with other services.

7.1.3. Using custom TLS certificates to encrypt communications between a virtual database and other services

Data virtualization uses TLS certificates to encrypt network traffic between a virtual database service and the clients or data sources that the service communicates with.

To ensure that both internal and external services can authenticate the CA signature on the TLS certificates, the certificates must originate from a trusted, third-party certificate authority (CA).

To configure the data virtualization service to use a custom certificate to encrypt traffic, you add the certificate details to an OpenShift secret and deploy the secret to the namespace where you want to create the virtual database. You must create the secret before you create the service.

To use custom TLS certificates, you first generate a keystore and truststore from the certificates. You then convert the binary content of those files into Base64 encoding. After you convert the content, you add the content to a secret and deploy it to OpenShift.

7.1.4. Creating a keystore from the private key and public key certificate

To use a custom TLS certificate from a trusted third-party Certificate Authority (CA) with the virtual database service, you must first package the certificate's private key and public key certificate into a keystore.

Adding a custom keystore and truststore for your TLS certificates provides the most flexible framework for securing communications, enabling you to define a configuration that works in any situation.

You must complete the following general tasks:

  • Obtain a TLS certificate from a CA.
  • Build a keystore in PKCS12 format from the public key certificate and private key.
  • Convert the keystore to Base64 encoding.

Prerequisites

  • You have a TLS certificate from a trusted, third-party CA.
  • You have the private key (tls.key) and public key certificate (tls.crt) from the TLS certificate in PEM format.

Procedure

  1. Using the tls.key and tls.crt files from the certificate in PEM format as input, run the following command to create a keystore:

    openssl pkcs12 -export -in tls.crt -inkey tls.key -out keystore.pkcs12

    The following table describes the elements of the openssl command:

    Table 7.1. openssl command to generate a keystore from certificate files

    Command element         Description

    pkcs12                  The openssl pkcs12 utility.
    -export                 Exports the file.
    -in tls.crt             Identifies the certificate file.
    -inkey tls.key          Identifies the key to import into the keystore.
    -out keystore.pkcs12    Specifies the name of the keystore file to create.

  2. From the keystore.pkcs12 file that you generated in the previous step, type the following command to convert the file to Base64 encoding:

    base64 keystore.pkcs12
  3. Copy the contents of the command output to a YAML secret file.

    For more information, see Section 7.1.7, “Creating an OpenShift secret to store the keystore and truststore”.
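
Note

Some base64 implementations wrap long output across multiple lines, which can make the encoded keystore awkward to paste into the secret YAML file as a single value. With the GNU coreutils base64 utility, for example, you can disable line wrapping:

base64 -w 0 keystore.pkcs12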

7.1.5. Creating a truststore from the public key certificate

Prerequisites

  • You have the keytool key and certificate management utility from Java 11 or later, which uses PKCS12 as the default keystore format.
  • You have a public key certificate (tls.crt) in PEM format.

Procedure

In the following steps, you generate a truststore from the public key certificate, convert it to Base64 encoding, and add the encoded content to the secret YAML file.

  1. From a terminal window, using the public key certificate, tls.crt, type the following command:

    keytool -import -file tls.crt -alias myalias -keystore truststore.pkcs12
  2. Type the following command to convert the output to Base64 encoding:

    base64 truststore.pkcs12
  3. Copy the contents of the command output and paste it into the secret.
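
By default, keytool prompts you for a truststore password and asks you to confirm that you trust the certificate. If you prefer to run the command non-interactively, for example from a script, the following sketch supplies the answers on the command line; TRUSTSTORE_PASSWORD is a placeholder for the password that you choose:

keytool -importcert -noprompt -alias myalias -file tls.crt \
  -keystore truststore.pkcs12 -storetype PKCS12 -storepass TRUSTSTORE_PASSWORD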

7.1.6. Adding the keystore and truststore passwords to the configuration

To use the custom keystores, you must specify the passwords to use in virtual database operations. Provide the passwords as environment properties in the custom resource for the virtual database by setting the following properties.

  • TEIID_SSL_KEYSTORE_PASSWORD
  • TEIID_SSL_TRUSTSTORE_PASSWORD
  • KEYCLOAK_TRUSTSTORE_PASSWORD (For use with Red Hat SSO/Keycloak, if the trust manager is not disabled)

The following example shows an excerpt from a virtual database CR in which the preceding variables are defined:

Example: Custom resource showing environment variables to define passwords for certificate keystore and truststores

apiVersion: teiid.io/v1alpha1
kind: VirtualDatabase
metadata:
  name: dv-customer
spec:
  replicas: 1
  env:
  - name: TEIID_SSL_KEYSTORE_PASSWORD
    value: KEYSTORE_PASSWORD
  - name: TEIID_SSL_TRUSTSTORE_PASSWORD
    value: TRUSTSTORE_PASSWORD
  - name: KEYCLOAK_TRUSTSTORE_PASSWORD
    value: SSO_TRUSTSTORE_PASSWORD

  ... additional content removed for brevity

After you configure the cluster to use the preceding certificates in the keystore and truststore, the virtual database can use the certificates to encrypt or decrypt communications with services.

After you deploy the TLS secret to OpenShift, run the Data Virtualization Operator to create a virtual database with the name that is specified in the secret. For more information, see Section 6.2, “Deploying virtual databases”.

When the Operator creates the virtual database, it searches for a secret that has a name that matches the name specified in the CR for the virtual database service. If it finds a matching secret, the Operator configures the service to use the secret to encrypt communications between the virtual database service and other applications and services.
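
For example, assuming that the keystore secret and the CR are both based on the dv-customer virtual database name, and that you deploy the CR with oc as described in Chapter 6, the order of operations might look like the following sketch:

# Deploy the keystore secret first, so that it exists when the Operator creates the service
oc create -f dv-customer-keystore.yaml

# Then create or update the virtual database custom resource
oc apply -f dv-customer.yaml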

7.1.7. Creating an OpenShift secret to store the keystore and truststore

To make the keystore and truststore available to the virtual database service, you complete the following tasks:

  • Create a YAML file that defines a secret of type Opaque with the name VDB_NAME-keystore.
  • Add the Base64-encoded keystore and truststore to the secret.
  • Deploy the secret to OpenShift.

Prerequisites

  • You have Developer or Administrator access to the OpenShift project where you want to create the secret and virtual database.
  • You have the keystore and truststore in Base64 format that you generated from a custom TLS certificate.

Procedure

  1. Create a YAML file to define a secret of type Opaque with the name VDB_NAME-keystore, and include the following information:

    • The keystore and truststore that you generated from the TLS certificate, in Base64 encoding.
    • The name of the virtual database that you want to create.
    • The OpenShift namespace in which you want to create the virtual database.

      For example:

      apiVersion: v1
      kind: Secret
      type: Opaque
      metadata:
        name: VDB_NAME-keystore 1
        namespace: PROJECT_NAME 2
      
      data: 3
        keystore.pkcs12: >-
            -----BEGIN KEYSTORE-----
            [...]
            -----END KEYSTORE-----
        truststore.pkcs12: >-
            -----BEGIN TRUSTSTORE-----
            [...]
            -----END TRUSTSTORE-----
      1
      The name of the secret. The secret name is based on the name of the virtual database object in the CR YAML file that the Data Virtualization Operator uses to create the virtual database, with the suffix -keystore appended, for example, dv-customer-keystore.
      2
      The OpenShift namespace in which the virtual database service is deployed, for example, myproject.
      3
      The data value is made up of the contents of the TLS keystore certificate (keystore.pkcs12), and the truststore (truststore.pkcs12) in base64-encoded format.
  2. Save the file as dv-customer-keystore.yaml.
  3. Open a terminal window, sign in to the OpenShift project where you want to add the secret, and then type the following command:

    $ oc apply -f dv-customer-keystore.yaml

The following example shows a YAML file for creating an OpenShift secret to store a keystore and truststore.

Example: dv-customer-keystore.yaml file

kind: Secret
apiVersion: v1
metadata:
  name: dv-customer-keystore 1
  namespace: my-project
data:
  keystore.pkcs12: >-
    MIIKAwIBAzCCCc8GCSqGSIb3DQEHAaCCCcAEggm8MIIJuDCCBG8GCSqGSIb3DQEHBqCCBGAwggRcAgEAMIIEVQYJKoZIhvcNAQcBMBwGCiqGSIb3DQEMAQYwDgQI7RFjrbx64PkCAggAgIIEKLn2Y244Jw2O37QlmT+uS3XE0vErwJIIwpwR8nlu8YDPTU8UtN3nDXNkdKbolQVTCVnzFSbJ7RohoAEJdB3D8iDkVpFpvIbBYUvq3LB8obFuSuFP50IMprp9gnUjRit5/nOGdJIKiM3ZJQgE46gvYsQJiKo0CGlBf/7J9CWh/zwp7fV4JzxZaW/4bkUaz1jegPx0lYEPW14U1lNF0BCn0DnTffCHnSqQIlW+KwNj3uNtqVqLTt4LoLTbfvCjYN+6q+0Ei67a05g8X2ZI7y7LJvfRGlAssVwqOIOMeiwajOtsJXXaN1WsZjURFVIJpmlWAcG/72J9xUlA5YYUzdxI8GdaQis78b0lsvYPU0WqCMBmfoJMxhQuIfpZVqDgqTTaJhvrv7lw/VYLyJKG1N0pAQ9dDnWUtje7MB+Q853ffjzZ5PDp8G+BoxUrsxABheslI8PwIb0JG66yxyBmgvpxlGVN1IyHLKZzn3/RUCA8/WLjuD1SAmFQxfDoOir1LEnodXLEVLH+8/Ety0xvP5T9BFn/YVsPSjplhukkdfqiHDqxffg8aJlpfOC8AJ4EVItb/W8fBQovQ+jhm1LpuQedA6fiaROYYHChaQM94y9HqPIveCEpKGkG47ohGWU/LCht/Da3iHhl6h9BCX/U/PcsojKy8ZmzZTJb+oIRCx+A84X/hObGoqU+dOItQ//G37BIL7jIcQ9gwShtQhXmdCtrh10iNKGxaDxyBBJS64+KeuAv16eyj3UHoR3Ux9P3RVzZ6bH+IrKsWRacg+JYzEZNAzo0NYkVCqgvbdC+fWDtq6rQA2knjRhwwK/WU/=
  truststore.pkcs12: >-
    MIIWlgIBAzCCFmIGCSqGSIb3DQEHAaCCFlMEghZPMIIWSzCCFkcGCSqGSIb3DQEHBqCCFjgwghY0AgEAMIIWLQYJKoZIhvcNAQcBMBwGCiqGSIb3DQEMAQYwDgQIq4NIOxI8IoUCAggAgIIWAN8YKMvjIo6qGX2Rz0SIKiDlUNySI5GKjt1RKicid9QIVfyKjWhjufqn+OXjhaxYJtZ+GgW3SdO1il0cHEGSQycEJPQ/diAMqmdgoyd1batEYxp1baR9wm4aqmYip0j3Xd84fpQylTs73tFOZYWJYPDqq27jYYbEUL0bOKkoMOvIftW6y18gT/E8XVYi7Gy81IJzNnhQkyt4bZO3/vyoEgvyUDGLCtFxSk4U9JiGk3RtzLW1HnOiGof1B/lJs7vHe13QITJWqxhqKs4rWYj8pOiyrIhAcLtGMEUH9cyQ7gpYFvx5KObY//gEDr2MnRdR4cm79wuffg9mUH96hvqwrm/dpJC1lP+dRM/9Alyn9KEuEilWaUOxkHobvcCs04fh2Fw8GS4wdCAiB7Rj3e2U1duWdg3MJ5Qxq4SVEZeTPkDKetqZTTWpzDiw8nxgZx7MGYAQ5kIYeWHWzVs9fFDuNFTnvhEb535KMz6qZYMjJdiZRVhX5XyCKyLyiBovQsdHDUkuubroJfUFe3VI7FNGNVJ1OIuqrIVJVYIpqER6khWoCOizm/L1PWU8XS6fsR3ES296VaukzAyewQIpQhEek9XjRY=
type: Opaque

1
dv-customer is the name of the virtual database that will use the keystore.

Based on the format of the preceding example, create a secret file with the content from your keystore and truststore.

  1. Type the following command to add the secret to OpenShift:

    oc create -f dv-customer-keystore.yaml
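
To verify that the secret was added and contains the expected keys, you can inspect it without displaying the encoded values. For example:

oc get secret dv-customer-keystore
oc describe secret dv-customer-keystore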

7.2. Using secrets to store data source credentials

Create and deploy secret objects to store values for your environment variables.

Although secrets exist primarily to protect sensitive data by obscuring the value of a property, you can use them to store the value of any property.

Prerequisites

  • You have the login credentials and other information that are required to access the data source.

Procedure

  1. Create a secrets file to contain the credentials for your data source, and save it locally as a YAML file. For example,

    Sample secrets.yml file

    apiVersion: v1
    kind: Secret
    metadata:
      name: postgresql
    type: Opaque
    stringData:
      database-user: bob
      database-name: sampledb
      database-password: bob_password

  2. Deploy the secret object on OpenShift.

    1. Log in to OpenShift, and open the project that you want to use for your virtual database. For example,

      oc login --token=<token> --server=https://<server>
      oc project <projectName>

    2. Run the following command to deploy the secret file:

      oc create -f ./secrets.yml

  3. Set an environment variable to retrieve its value from the secret.

    1. In the environment variable, use the valueFrom: and secretKeyRef: elements to specify that the variable retrieves its value from a key in the secret that you created in Step 1.

      For example, in the following excerpt, the SPRING_DATASOURCE_SAMPLEDB_PASSWORD variable retrieves its value from a reference to the database-password key of the postgresql secret:

      - name: SPRING_DATASOURCE_SAMPLEDB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: postgresql
            key: database-password

The following example shows the use of secrets to define the username and password properties for a PostgreSQL database.

Sample data source configuration that uses secrets to define properties

datasources:
  - name: sampledb
    type: postgresql
    properties:
      - name: username
        valueFrom:
          secretKeyRef:
            name: sampledb-secret
            key: username
      - name: password
        valueFrom:
          secretKeyRef:
            name: sampledb-secret
            key: password
      - name: jdbc-url
        value: jdbc:postgresql://database/postgres

In the preceding example, sampledb denotes a custom name that is assigned to the source database. The same name would be assigned to the SERVER definition for the data source in the DDL for the virtual database. For example, CREATE SERVER sampledb FOREIGN DATA WRAPPER postgresql.
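
For illustration, the following DDL sketch shows how the sampledb server name from the data source configuration is referenced in the virtual database DDL. The schema name accounts is a hypothetical example:

-- Declare the data source under the same name that is used in the data source configuration
CREATE SERVER sampledb FOREIGN DATA WRAPPER postgresql;

-- Create a foreign schema that is associated with the sampledb source
CREATE SCHEMA accounts SERVER sampledb;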

7.3. Securing OData APIs for a virtual database

You can integrate data virtualization with Red Hat Single Sign-On and Red Hat 3scale API Management to apply advanced authorization and authentication controls to the OData endpoints for your virtual database services.

The Red Hat Single Sign-On technology uses OpenID-Connect as the authentication mechanism to secure the API, and uses OAuth2 as the authorization mechanism. You can integrate data virtualization with Red Hat Single Sign-On alone, or along with 3scale.

By default, after you create a virtual database, the OData interface to it is discoverable by 3scale, as long as the 3scale system is deployed in the same cluster and namespace. By securing access to OData APIs through Red Hat Single Sign-On, you can define user roles and implement role-based access to the API endpoints. After you complete the configuration, you can control access in the virtual database at the level of the view, column, or data source. Only authorized users can access the API endpoint, and each user is permitted a level of access that is appropriate to their role (role-based access). By using 3scale as a gateway to your API, you can take advantage of 3scale’s API management features, allowing you to tie API usage to authorized accounts for tracking and billing.

When a user logs in, 3scale negotiates authentication with the Red Hat Single Sign-On package. If the authentication succeeds, 3scale passes a security token to the OData API for verification. The OData API then reads permissions from the token and applies them to the data roles that are defined for the virtual database.

Prerequisites

  • Red Hat Single Sign-On is running in the OpenShift cluster. For more information about deploying Red Hat Single Sign-On, see the Red Hat Single Sign-On for OpenShift documentation.
  • You have Red Hat 3scale API Management installed in the OpenShift cluster that hosts your virtual database.
  • You have configured integration between 3scale and Red Hat Single Sign-On. For more information, see Configuring Red Hat Single Sign-On integration in Using the Developer Portal.

    • You have assigned the realm-management and manage-clients roles.
    • You created API users and specified credentials.
    • You configured 3scale to use OpenID-Connect as the authentication mechanism and OAuth2 as the authorization mechanism.

7.3.1. Configuring Red Hat Single Sign-On to secure OData

You must add configuration settings in Red Hat Single Sign-On to enable integration with data virtualization.

Prerequisites

  • Red Hat Single Sign-On is running in the OpenShift cluster. For information about deploying Red Hat Single Sign-On, see the Red Hat Single Sign-On for OpenShift documentation.
  • You run the Data Virtualization Operator to create a virtual database in the cluster where Red Hat Single Sign-On is running.

Procedure

  1. From a browser, log in to the Red Hat Single Sign-On Admin Console.
  2. Create a realm for your data virtualization service.

    1. From the menu for the master realm, hover over Master and then click Add realm.
    2. Type a name for the realm, such as datavirt, and then click Create.
  3. Add roles.

    1. From the menu, click Roles.
    2. Click Add Role.
    3. Type a name for the role, for example ReadRole, and then click Save.
    4. Create other roles as needed to map to the roles in your organization’s LDAP or Active Directory. For information about federating user data from external identity providers, see the Server Administration Guide.
  4. Add users.

    1. From the menu, click Users, and then click Add user.
    2. On the Add user form, type a user name, for example, user, specify other user properties that you want to assign, and then click Save.
      Only the user field is mandatory.
    3. From the details page for the user, click the Credentials tab.
    4. Type and confirm a password for the user, click Reset Password, and then click Change password when prompted.
  5. Assign roles to the user.

    1. Click the Role Mappings tab.
    2. In the Available Roles field, click ReadRole and then click Add selected.
  6. Create a second user called developer, and assign a password and roles to the user.
  7. Create a data virtualization client entry.

    The client entry represents the data virtualization service as an SSO client application.

    1. From the menu, click Clients.
    2. Click Create to open the Add Client page.
    3. In the Client ID field, type a name for the client, for example, dv-client.
    4. In the Client Protocol field, choose openid-connect.
    5. Leave the Root URL field blank, and click Save.

You are now ready to add SSO properties to the CR for the data virtualization service.

7.3.2. Adding SSO properties to the custom resource file

After you configure Red Hat Single Sign-On to secure the OData endpoints for a virtual database, you must configure the virtual database to integrate with Red Hat Single Sign-On. To configure the virtual database to use SSO, you add SSO properties to the CR that you used when you first deployed the service (for example, dv-customer.yaml). You add the properties as environment variables. The SSO configuration takes effect after you redeploy the virtual database.

In this procedure you add the following Red Hat Single Sign-On properties to the CR:

Realm (KEYCLOAK_REALM)
The name of the realm that you created in Red Hat Single Sign-On for your virtual database.
Authentication server URL (KEYCLOAK_AUTH_SERVER_URL)
The base URL of the Red Hat Single Sign-On server. It is usually of the form https://host:port/auth.
Resource name (KEYCLOAK_RESOURCE)
The name of the client that you created in Red Hat Single Sign-On for the data virtualization service.
SSL requirement (KEYCLOAK_SSL_REQUIRED)
Specifies whether requests to the realm require SSL/TLS. You can require SSL/TLS for all requests, external requests only, or none.
Access type (KEYCLOAK_PUBLIC_CLIENT)
The OAuth application type for the client. Public access type is for client-side clients that sign in from a browser.

Prerequisites

  • You ran the Data Virtualization Operator to create a virtual database.
  • Red Hat Single Sign-On is running in the cluster where the virtual database is deployed.
  • You have the CR YAML file, for example, dv-customer.yaml, that you used to deploy the virtual database.
  • You have administrator access to the Red Hat Single Sign-On Admin Console.

Procedure

  1. Log in to the Red Hat Single Sign-On Admin Console to find the values for the required authentication properties.
  2. In a text editor, open the CR YAML file that you used to deploy your virtual database, and define authentication environment variables that are based on the values of your Red Hat Single Sign-On properties.

    For example:

    env:
      - name: KEYCLOAK_REALM
        value: master
      - name: KEYCLOAK_AUTH_SERVER_URL
        value: http://rh-sso-datavirt.openshift.example.com/auth
      - name: KEYCLOAK_RESOURCE
        value: datavirt
      - name: KEYCLOAK_SSL_REQUIRED
        value: external
      - name: KEYCLOAK_PUBLIC_CLIENT
        value: "true"
  3. Declare a build source dependency for the following Maven artifact for securing data virtualizations: org.teiid:spring-keycloak

    For example:

    env:
      ....
    build:
      source:
        dependencies:
          - org.teiid:spring-keycloak
  4. Save the CR.
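
For reference, the env entries and the build dependency are siblings under spec in the CR. The following consolidated sketch reuses the placeholder values from the preceding steps; adjust the values to match your realm, client, and server URL:

apiVersion: teiid.io/v1alpha1
kind: VirtualDatabase
metadata:
  name: dv-customer
spec:
  replicas: 1
  env:
  - name: KEYCLOAK_REALM
    value: master
  - name: KEYCLOAK_AUTH_SERVER_URL
    value: http://rh-sso-datavirt.openshift.example.com/auth
  - name: KEYCLOAK_RESOURCE
    value: datavirt
  - name: KEYCLOAK_SSL_REQUIRED
    value: external
  - name: KEYCLOAK_PUBLIC_CLIENT
    value: "true"
  build:
    source:
      dependencies:
        - org.teiid:spring-keycloak

  ... additional content removed for brevity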

You are now ready to define data roles in the DDL for the virtual database.

7.3.3. Defining data roles in the virtual database DDL

After you configure Red Hat Single Sign-On to integrate with data virtualization, complete the required configuration by defining role-based access policies in the DDL for the virtual database. Depending on how you deployed the virtual database, the DDL might be embedded in the CR file, or exist as a separate file.

You add the following information to the DDL file:

  • The name of the role. Roles that you define in the DDL must map to roles that you created earlier in Red Hat Single Sign-On.

    Tip

    For the sake of clarity, match the role names in the DDL file to the role names that you specified in Red Hat Single Sign-On. Consistent naming makes it easier to correlate how the roles that you define in each location relate to each other.

  • The database access to grant to users who have the specified role, for example, SELECT permissions on a particular view.

Procedure

  1. In a text editor, open the file that contains the DDL description that you used to deploy the virtual database.
  2. Insert statements to add any roles that you defined for virtual database users in Red Hat Single Sign-On. For example, to add a role with the name ReadRole, add the following statement to the DDL:

    CREATE ROLE ReadRole WITH FOREIGN ROLE ReadRole;

    Add separate CREATE ROLE statements for each role that you want to implement for the virtual database.

  3. Insert statements that specify the level of access that users with the role have to database objects. For example,

    GRANT SELECT ON TABLE "portfolio.CustomerZip" TO ReadRole;

    Add separate GRANT statements for each role that you want to implement for the virtual database.

  4. Save and close the CR or DDL file.

    You are now ready to redeploy the virtual database. For information about how to run the Data Virtualization Operator to deploy the virtual database, see Chapter 6, Running the Data Virtualization Operator to deploy a virtual database.

    After you redeploy the virtual database, add a redirect URL in the Red Hat Single Sign-On Admin Console. For more information, see Section 7.3.4, “Adding a redirect URI for the data virtualization client in the Red Hat Single Sign-On Admin Console”.

7.3.4. Adding a redirect URI for the data virtualization client in the Red Hat Single Sign-On Admin Console

After you enable SSO for your virtual database and redeploy it, specify a redirect URI for the data virtualization client that you created in Section 7.3.1, “Configuring Red Hat Single Sign-On to secure OData”.

Redirect URIs, or callback URLs, are required for public clients, such as OData clients that use OpenID Connect to authenticate and that communicate with an identity provider through the redirect mechanism.

For more information about adding redirect URIs for OIDC clients, see the Red Hat Single Sign-On Server Administration Guide.

Prerequisites

  • You enabled SSO for a virtual database and used the Data Virtualization Operator to redeploy it.
  • You have administrator access to the Red Hat Single Sign-On Admin Console.

Procedure

  1. From a browser, sign in to the Red Hat Single Sign-On Admin Console.
  2. From the security realm where you created the client for the data virtualization service, click Clients in the menu, and then click the ID of the data virtualization client that you created previously (for example, dv-client).
  3. In the Valid Redirect URIs field, type the root URL for the OData service and append an asterisk to it. For example, http://datavirt.odata.example.com/*
  4. Test whether Red Hat Single Sign-On intercepts calls to the OData API.

    1. From a browser, type the address of an OData endpoint, for example:

      http://datavirt.odata.example.com/odata/CustomerZip

      A login page prompts you to provide credentials.

  5. Sign in with the credentials of an authorized user.

    Your view of the data depends on the role of the account that you use to sign in.
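
If you prefer to test from the command line instead of a browser, you can request a token from Red Hat Single Sign-On and pass it to the OData endpoint. The following sketch assumes the datavirt realm, the dv-client public client with Direct Access Grants enabled, the example host names used earlier in this chapter, placeholder user credentials, and the jq utility for extracting the token:

# Request an access token from the Red Hat Single Sign-On token endpoint
TOKEN=$(curl -s \
  -d "client_id=dv-client" \
  -d "username=user" \
  -d "password=USER_PASSWORD" \
  -d "grant_type=password" \
  http://rh-sso-datavirt.openshift.example.com/auth/realms/datavirt/protocol/openid-connect/token \
  | jq -r '.access_token')

# Call the OData endpoint with the bearer token
curl -H "Authorization: Bearer $TOKEN" http://datavirt.odata.example.com/odata/CustomerZip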

Note

Some endpoints, such as odata/$metadata, are excluded from security filtering so that they can be discovered by other services.