
Quick Start Guide

Red Hat OpenShift Database Access 1

Service Preview Release

Red Hat Data Services Documentation Team

Abstract

This Quick Start Guide helps administrators and developers access the OpenShift Database Access managed service. With this managed service, they can import and connect to external cloud-hosted database providers, connect database instances to their applications, and create access policies.
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.

Preface

Important

Red Hat OpenShift Database Access is a Service Preview release add-on that enables:

  • Easy consumption of database-as-a-service (DBaaS) offerings from partners including MongoDB Atlas, Crunchy Bridge, CockroachDB, and Amazon’s Relational Database Service (RDS) directly from managed OpenShift clusters.
  • Easy management, monitoring, and control by administrators of cloud-hosted DBaaS including consumption, usage, and status.

Red Hat OpenShift Database Access (OpenShift Database Access) is a Service Preview release. A Service Preview release has features that are early in development. Service Preview releases are not production ready and might have features and functionality that are not fully tested. Red Hat advises you not to use OpenShift Database Access for production or business-critical workloads.

To give feedback or inform our engineering team of any technical issues with OpenShift Database Access, please use rhoda-requests@redhat.com.

Chapter 1. Installing the Red Hat OpenShift Database Access add-on

The Red Hat OpenShift Database Access add-on allows you to configure a connection to cloud-database providers, create new database instances, and connect database instances to applications for developers to use.

Prerequisites

  • A Red Hat user account to access the Red Hat Hybrid Cloud Console.
  • OpenShift Container Platform (OCP) 4.10 or higher.

Procedure

  1. Log into the Red Hat Hybrid Cloud Console with your credentials.
  2. Click OpenShift from the navigation menu.
  3. Click Clusters to display a list of your clusters, and select the name of the cluster that you want to add database access to.
  4. Click Add-ons, and select the Red Hat OpenShift Database Access tile.
  5. Click Install.
  6. Wait for the installation process to finish. Once the add-on installation completes successfully, a green checkmark appears on the tile.
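    Optionally, you can verify the installation from the command line by checking that the add-on operator pods are running. This is a minimal sketch; the redhat-dbaas-operator namespace is an assumption based on the project namespace used later in this guide, and it might differ on your cluster.

    Example

    $ oc get pods -n redhat-dbaas-operator    # all add-on operator pods should report a Running status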

Additional resources

  • For additional information about Red Hat OpenShift Database Access, see the Reference Guide.

Chapter 2. Accessing the Database Access menu for configuring and monitoring

From the OpenShift console, you can access the OpenShift Database Access navigation menu. On the Database Access page, use the appropriate project namespace for importing a cloud-database provider account.

Important

If you use MongoDB Atlas as a cloud-database provider, you must add the IP address of the application pod to the MongoDB Atlas IP Access List. If the IP address is not in the IP Access List, a 504 gateway timeout error occurs. Visit the MongoDB Atlas website for more details on adding an IP address to your database project.
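If you use the MongoDB Atlas command-line interface, you can also add an entry to the IP Access List from a terminal. This is a sketch only: the IP address and project ID are placeholders, and the atlas accessLists create options shown here are assumptions that can differ between Atlas CLI versions, so verify them with the CLI help before running the command.

    $ atlas accessLists create 203.0.113.10 \
    --type ipAddress \
    --projectId <your-atlas-project-id>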

Note

After you create a DBaaSPolicy as a non-administrative user, a restricted access message appears on the Operator details page, under the Provider Account Policy section. Select the Current namespace only option to view the policies.
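For reference, a DBaaSPolicy is a namespaced custom resource that you can also create from the command line. The following is a minimal sketch only: the API version, the empty spec, and the example names are assumptions that can differ between operator releases, so treat the console workflow in this guide as authoritative.

    $ cat <<EOF | oc apply -f -
    apiVersion: dbaas.redhat.com/v1alpha1
    kind: DBaaSPolicy
    metadata:
      name: example-policy         # hypothetical policy name
      namespace: example-project   # namespace that the policy applies to
    spec: {}                       # an empty spec is an assumption; your release might require fields
    EOF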

Prerequisites

Procedure

  1. Log into the OpenShift console.
  2. To select the correct project namespace, follow these sub-steps:

    Single page screenshot of the administrator’s entry point
    1. Select the Administrator perspective.
    2. Expand the Data Services navigation menu, and click Database Access.

      Note

      You might need to scroll down the navigation menu.

    3. Click the Project dropdown menu, and then enable the Show default projects switch.
    4. Type dbaas in the search field.
    5. Select the redhat-dbaas-operator or openshift-dbaas-operator project namespace.

      From the database inventory page you get a snapshot of the database environment. You can import a cloud-hosted database provider account, and create a new database instance.

      Database inventory landing page
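      You can also list imported provider accounts from the command line. This sketch assumes that the add-on represents each provider account as a DBaaSInventory resource in the selected project namespace; the resource name can vary between releases.

      Example

      $ oc get dbaasinventories -n openshift-dbaas-operator    # one entry for each imported provider account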

Additional resources

  • For additional information about Red Hat OpenShift Database Access, see the Reference Guide.

Chapter 3. Accessing the developer workspace and adding a database instance

You can access the developer workspace in the OpenShift console to manage connectivity for database instances to applications.

Prerequisites

  • Installation of the OpenShift Database Access add-on.
  • An imported cloud-database provider account.

Procedure

  1. Log into the OpenShift console.
  2. Access the developer workspace to select an existing project or create a new project, and choose a cloud-hosted database provider to add to your project:

    Single page screenshot of the developer’s entry point
    1. Select the Developer perspective.
    2. Click +Add.
    3. Click the Project dropdown menu.
    4. Create a new project, or search for your application’s project.
    5. Click the Cloud-Hosted Databases tile to connect to a cloud-database provider.
  3. Click your cloud-hosted database provider’s tile.
  4. Click Add to Topology.
  5. Select a previously configured Provider Account for this database instance from the dropdown menu.
  6. Select the database instance ID you want to use, and click Add to Topology.
  7. Click Continue. Upon a successful connection, the Topology page opens.
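    Optionally, you can confirm the connection from the command line. This sketch assumes that each database instance added to a project is represented by a DBaaSConnection resource; the project name is a placeholder.

    Example

    $ oc get dbaasconnections -n <your-project>    # one entry for each database instance added to the project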

Additional resources

  • For additional information about Red Hat OpenShift Database Access, see the Reference Guide.

Chapter 4. Connecting an application to a database instance using the topology view

You can add a database to an application by making a connection to the database instance from the cloud-database provider. On the Topology page, you see the application, along with the new database instance.

Procedure

  1. Hover the cursor over the deployment node, and drag the arrow from the application to the new database instance to create a binding connector. Alternatively, right-click the deployment node, and click Create Service Binding to create a binding connector.

    The topology view of the application and the database with a dotted line arrow indicating database binding in the process of being dragged from the database to the application
  2. In the pop-up dialog, click Create. Once the binding is created, the application pod restarts. After the restart, your application has database connectivity.

    The topology view of the application and the database with a solid line arrow indicating database binding to the application is complete

    This binding visually represents the injection of database connection information and credentials into the application pod.

  3. Use a service binding library based on your application’s framework to consume the service binding information and credentials.
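    With the Service Binding specification, the connection details are projected as files under the directory named by the SERVICE_BINDING_ROOT environment variable in the application container. The following sketch inspects those files in a hypothetical deployment named my-app; your binding name and the available keys will differ.

    Example

    $ oc exec deployment/my-app -- sh -c 'ls -R "$SERVICE_BINDING_ROOT"'        # list the mounted binding directories
    $ oc exec deployment/my-app -- sh -c 'cat "$SERVICE_BINDING_ROOT"/*/host'   # read one binding key, such as host, if present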

Additional resources

  • See the Red Hat OpenShift Database Access Reference Guide for more details on service bindings, and working application examples of service binding libraries.

Appendix A. Find your MongoDB Atlas account credentials

You need the Organization ID, the Organization Public Key, and the Organization Private Key to create a provider account resource for MongoDB Atlas.

Important

If you use MongoDB Atlas as a cloud-database provider, you must add the IP address of the application pod to the MongoDB Atlas IP Access List. If the IP address is not in the IP Access List, a 504 gateway timeout error occurs. Visit the MongoDB Atlas website for more details on adding an IP address to your database project.

Procedure

  1. From the MongoDB Atlas home page, sign in to your account.
  2. From your account home page:

    Single screenshot for finding your Organization ID value
    1. Select Organization from the dropdown menu.
    2. Click Settings from the Organization navigation menu.
    3. Copy the Organization ID value.

      Note

      In some cases, your organization ID might be hidden by default.

  3. Next, from the account home page:

    Single screenshot for finding your API keys
    1. Click Access Manager from the Organization navigation menu.
    2. Click API Keys.
    3. If you have existing API keys, you can find them listed here. Copy the API public and private keys for the import provider account fields. Also, verify that your API keys have the Organization Owner and Organization Member permissions.
  4. If you need new API keys, click Create API Key, and proceed to the next step.
  5. On the Create API Key page, enter a Description, and under the Organization Permissions dropdown box, select the Organization Owner and Organization Member permissions. Click Next.
  6. Copy the API public and private keys for the import provider account fields.

Appendix B. Find your Crunchy Data Bridge account credentials

You need the Public API Key and the Private API Secret to create a provider account resource for Crunchy Data Bridge.

Procedure

  1. From the Crunchy Data Bridge Log in page, sign in to your account.
  2. From your personal account home page, click Settings, and then click Settings from the navigation menu.

    Crunchy Data Bridge settings on the navigation menu
  3. Copy the Application ID and Application Secret values for the import provider account fields.

    Crunchy Data Bridge API key and secret values

Appendix C. Find your CockroachDB account credentials

You need the API Key to create a provider account resource for CockroachDB.

Important

Currently, access to the Service Accounts tab on the Access Management page is enabled by invite only from CockroachDB. To expose the Service Accounts tab on the Access Management page, you can request that this feature be enabled. Contact CockroachDB support and ask for the Cloud API to be enabled in the CockroachDB Cloud Console for your user account.

Additionally, you can view this quick video tutorial from Cockroach Labs on creating an account.

Procedure

  1. From the CockroachDB service account page, log in to your account.
  2. From your service account home page, select Access from the navigation menu.
  3. Click Service Accounts from the Access Management page.
  4. Click Create Service Account.
  5. Enter an Account name, select the Permissions, and click Create.

    Step 1 for creating a service account
  6. Enter an API key name, and click Create.

    Step 2 for creating a service account
  7. Copy the Secret key for the import provider account field, and click Done.

    Step 3 for creating a service account

Appendix D. Find your Amazon RDS account credentials

You need an Amazon Web Services (AWS) Access key ID, an AWS Secret access key, and the AWS Region you are using to import an Amazon Relational Database Service (RDS) provider account for OpenShift Database Access. If you lose your AWS Access key ID and your AWS Secret access key, you must create new ones.

Note

Amazon allows only two access keys for each user. You might need to deactivate unused keys, or delete lost keys, before you can create a new access key.

Important

You are limited to one Amazon RDS provider account per OpenShift cluster. Using your AWS credentials on more than one OpenShift cluster breaks established connections on all OpenShift clusters, except for the last OpenShift cluster that established a connection.

Important

OpenShift Database Access only supports RDS database instance deployments, and does not support database cluster deployments.

Important

Database instances using a custom Oracle or custom SQL Server engine type are not supported.

Prerequisites

Procedure

  1. Sign in to Amazon’s Identity and Access Management (IAM) console with your AWS user account.
  2. From the IAM console home page, expand the Access management menu, and click Users.
  3. Select a user from the list.
  4. On the user’s summary page, select the Security credentials tab, and click the Create access key button.
  5. Copy the AWS Access key ID, and the AWS Secret access key.
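    If you have the AWS command-line interface configured, you can also create the access key from a terminal. The IAM user name is a placeholder.

    Example

    $ aws iam list-access-keys --user-name <iam-user-name>     # check how many access keys already exist
    $ aws iam create-access-key --user-name <iam-user-name>    # returns the Access key ID and Secret access key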

Appendix E. Manual installation of OpenShift Database Access on Azure Red Hat OpenShift

As a systems administrator, you can manually install the OpenShift Database Access add-on to a Red Hat OpenShift cluster running on Microsoft Azure.

E.1. Prerequisites

E.2. Creating an Azure Red Hat OpenShift cluster

You can manually create Red Hat OpenShift clusters running on Microsoft’s Azure cloud computing service. Once an Azure Red Hat OpenShift (ARO) cluster is running, you can register the ARO cluster with the Red Hat Hybrid Cloud Console.

Important

Currently, you cannot create an Azure Red Hat OpenShift cluster by using the Red Hat Hybrid Cloud Console.

Prerequisites

  • Installation of the Azure client, version 2.6 or higher.
  • A Microsoft Azure organizational or user account with an active subscription.
  • Download a pull secret for authentication to an OpenShift cluster.

Procedure

  1. Log in to the Microsoft Azure portal from the command-line client:

    $ az login
    Note

    Running this command opens a web browser for you to finish the login process. You can close the browser after you have successfully logged in.

  2. Set the account subscription:

    Syntax

    az account set --subscription 'SUBSCRIPTION_NAME'

    Example

    $ az account set --subscription 'Example Sub'

  3. Register the Azure resources:

    Example

    $ az provider register -n Microsoft.RedHatOpenShift --wait
    $ az provider register -n Microsoft.Compute --wait
    $ az provider register -n Microsoft.Storage --wait

  4. Create a resource group:

    Syntax

    az group create --name RESOURCE_GROUP --location LOCATION

    Example

    $ az group create --name rhoda-aro-gr --location eastus

  5. Create a virtual network:

    Syntax

    az network vnet create --resource-group RESOURCE_GROUP \
    --name aro-vnet \
    --address-prefixes IP_SUBNET/CIDR

    Example

    $ az network vnet create --resource-group rhoda-aro-gr \
    --name aro-vnet \
    --address-prefixes 10.0.0.0/22

  6. Create a subnet for the main node:

    Syntax

    az network vnet subnet create --resource-group RESOURCE_GROUP \
    --vnet-name aro-vnet \
    --name main-subnet \
    --address-prefixes IP_SUBNET/CIDR \
    --service-endpoints Microsoft.ContainerRegistry

    Example

    $ az network vnet subnet create --resource-group rhoda-aro-gr \
    --vnet-name aro-vnet \
    --name main-subnet \
    --address-prefixes 10.0.0.0/23 \
    --service-endpoints Microsoft.ContainerRegistry

  7. Create a subnet for the worker node:

    Syntax

    az network vnet subnet create --resource-group RESOURCE_GROUP \
    --vnet-name aro-vnet \
    --name worker-subnet \
    --address-prefixes IP_SUBNET/CIDR \
    --service-endpoints Microsoft.ContainerRegistry

    Example

    $ az network vnet subnet create --resource-group rhoda-aro-gr \
    --vnet-name aro-vnet \
    --name worker-subnet \
    --address-prefixes 10.0.2.0/23 \
    --service-endpoints Microsoft.ContainerRegistry

  8. Disable private endpoint policies for the main subnet:

    Syntax

    az network vnet subnet update --name main-subnet \
    --resource-group RESOURCE_GROUP \
    --vnet-name aro-vnet \
    --disable-private-link-service-network-policies true

    Example

    $ az network vnet subnet update --name main-subnet \
    --resource-group rhoda-aro-gr \
    --vnet-name aro-vnet \
    --disable-private-link-service-network-policies true

  9. Create the ARO cluster:

    Syntax

    az aro create --resource-group RESOURCE_GROUP \
    --name CLUSTER_NAME \
    --vnet aro-vnet \
    --master-subnet main-subnet \
    --worker-subnet worker-subnet \
    --apiserver-visibility Public \
    --ingress-visibility Public \
    --pull-secret @DOWNLOADED_PULL_SECRET_FILE_PWD

    Example

    $ az aro create --resource-group rhoda-aro-gr \
    --name rhoda-aro-example \
    --vnet aro-vnet \
    --master-subnet main-subnet \
    --worker-subnet worker-subnet \
    --apiserver-visibility Public \
    --ingress-visibility Public \
    --pull-secret @pull-secret.txt

    Note

    The cluster creation process can take up to an hour to complete.

Verification

  1. Once the cluster creation process finishes, copy the kubeadmin credentials, and the OpenShift console URL.

    1. Find the kubeadmin credentials:

      Syntax

      az aro list-credentials --name CLUSTER_NAME \
      --resource-group RESOURCE_GROUP

      Example

      $ az aro list-credentials --name rhoda-aro-example \
      --resource-group rhoda-aro-gr
      
      {
      "kubeadminPassword": "AAFAA-Zk3aR-V46bu-A4F7D",
      "kubeadminUsername": "kubeadmin"
      }

    2. Get the OpenShift console URL:

      Syntax

      az aro show --name CLUSTER_NAME \
      --resource-group RESOURCE_GROUP \
      --query "consoleProfile.url" -o tsv

      Example

      $ az aro show --name rhoda-aro-example \
      --resource-group rhoda-aro-gr \
      --query "consoleProfile.url" -o tsv
      
      https://console-openshift-console.apps.b879bjix.eastus.example.com/

  2. Use the kubeadmin credentials to log in to the OpenShift console.
  3. From the OpenShift console home page, select the Administrator perspective, expand Home on the navigation menu, and click Overview.

    Copy the Cluster ID value. This value is used to register the ARO cluster with the Red Hat Hybrid Cloud Console.
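    Alternatively, you can read the cluster ID from the command line by querying the ClusterVersion resource:

    Example

    $ oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}'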

E.3. Registering an Azure Red Hat OpenShift cluster with the Red Hat Hybrid Cloud Console

You must manually register Azure Red Hat OpenShift (ARO) clusters with the Red Hat Hybrid Cloud Console so that they start sending telemetry data about the ARO cluster.

Prerequisites

  • A running ARO cluster.
  • A downloaded pull secret for authenticating to an OpenShift cluster and to the Red Hat Hybrid Cloud Console.
  • A Red Hat user account for access to the Red Hat Hybrid Cloud Console.

Procedure

  1. Log in to OpenShift by using the command-line interface:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login \
    --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC \
    --server=https://example.com:6443

    Note

    You can find your command-line login token and URL from the OpenShift console. Once you log in to the OpenShift console, click your user name, click Copy login command, log in again with your user name and password, and then click Display Token to view the command.

  2. From the OpenShift host, get and decode the pull secret:

    Example

    $ oc get secrets pull-secret -n openshift-config \
    -o template='{{index .data ".dockerconfigjson"}}' | \
    base64 -d > pull-secret-post.json

  3. Open the downloaded pull secret file, and copy the whole cloud.openshift.com authorization entry.

    Copy Example

    "cloud.openshift.com":{
        "auth":"<_REDACTED_>",
        "email":"user@example.com"
    },

    Close the pull secret file.

  4. Open the pull-secret-post.json file, and paste the copied cloud.openshift.com authorization entry. Save and close the file.
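    Alternatively, you can merge the entry with jq instead of editing the file by hand. This sketch assumes the downloaded pull secret is saved as pull-secret.txt; adjust the file name to match your download.

    Example

    $ jq -s '.[0].auths["cloud.openshift.com"] = .[1].auths["cloud.openshift.com"] | .[0]' \
    pull-secret-post.json pull-secret.txt > pull-secret-merged.json
    $ mv pull-secret-merged.json pull-secret-post.json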
  5. Validate the JSON file:

    Example

    $ cat pull-secret-post.json | jq

  6. Update the ARO cluster with the new pull secret file:

    Example

    $ oc set data secret/pull-secret -n openshift-config \
    --from-file=.dockerconfigjson=./pull-secret-post.json

    After a few minutes the ARO cluster appears in the Red Hat Hybrid Cloud Console.

Verification

  1. Log in to the Red Hat Hybrid Cloud Console.
  2. Click OpenShift, and then click Clusters.
  3. From the Clusters page, filter for the ARO cluster by its cluster ID. The new ARO cluster is now in the cluster list.

    Note

    The default service-level agreement (SLA) is set to Self-Support 60-day evaluation.

  4. Optionally, you can change the default SLA:

    1. Click the cluster ID.
    2. Click the Edit subscription settings link within the warning message.
    3. Select the appropriate SLA for your subscription, and click Save.

E.4. Installing OpenShift Database Access using the Operator Lifecycle Manager

For some OpenShift cluster types, such as Azure Red Hat OpenShift (ARO), you must install Red Hat OpenShift Database Access (RHODA) manually by using the Operator Lifecycle Manager (OLM).

Prerequisites

  • A running OpenShift Dedicated (OSD) or ARO cluster.
  • Administrator privileges to the ARO cluster.

Procedure

  1. Log in to OpenShift by using the command-line interface:

    Syntax

    oc login --token=TOKEN --server=SERVER_URL_AND_PORT

    Example

    $ oc login \
    --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC \
    --server=https://example.com:6443

    Note

    You can find your command-line login token and URL from the OpenShift console. Log in to the OpenShift console, click your user name, and click Copy login command. Enter your user name and password again, and click Display Token to view the command.

  2. Create an OpenShift Database Access catalog source using the latest add-on image repository:

    Example

    $ cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: dbaas-operator
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      image: quay.io/osd-addons/dbaas-operator-index@sha256:83649d689a88763cd2c9e22090af4c9d006deed2d2a6f4ed0a9373811734b1b9
      displayName: DBaaS Operator
    EOF

  3. Verify that the catalog source is added and in a ready state:

    Example

    $ oc get catalogsource dbaas-operator \
    -n openshift-marketplace \
    -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'

    Important

    Wait until the catalog source is in a READY state, before proceeding to the next step.

  4. Log in to the OpenShift console as a user that has administrative privileges.
  5. In the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
  6. In the filter field, type database access, and click the OpenShift Database Access Operator tile.
  7. Click the Install button to show the operator details.
  8. The default and recommended namespace for the OpenShift Database Access operator is openshift-dbaas-operator. Click Install on the Install Operator page.

    Note

    All dependencies are automatically installed, including the provider account operators and the quick-start guides.

Verification

  1. Once the OpenShift Database Access operator installs successfully, a new navigation menu item called Data Services is added. Refreshing the navigation menu might take a few minutes. Expand the Data Services menu.
  2. Click Database Access.
  3. On the Database Access home page, you see an empty inventory table.

    Database inventory landing page
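    You can also confirm the installation from the command line by checking that the operator’s ClusterServiceVersion reports the Succeeded phase. The exact CSV name varies by operator version.

    Example

    $ oc get csv -n openshift-dbaas-operator    # the OpenShift Database Access entry should show PHASE Succeeded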

Additional resources

  • For additional information about Red Hat OpenShift Database Access, see the Reference Guide.

Legal Notice

Copyright © 2022 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.