Chapter 1. Installing 3scale on OpenShift

This section walks you through steps to deploy Red Hat 3scale API Management 2.8 on OpenShift.

The Red Hat 3scale API Management solution for on-premises deployment is composed of:

  • Two API gateways: embedded APIcast
  • One 3scale Admin Portal and Developer Portal with persistent storage

There are two ways to deploy a 3scale solution:

  • Using a template (see Deploying 3scale on OpenShift using a template)
  • Using the operator (see Deploying 3scale using the operator)

Note

Whether deploying 3scale using the operator or via templates, you must first configure registry authentication to the Red Hat container registry. See Authenticating with registry.redhat.io for container images.

Prerequisites

  • You must configure 3scale servers for UTC (Coordinated Universal Time).

To install 3scale on OpenShift, perform the steps outlined in the following sections:

1.1. System requirements for installing 3scale on OpenShift

This section lists the requirements for the 3scale - OpenShift template.

1.1.1. Environment requirements

Red Hat 3scale API Management requires an environment specified in supported configurations.

1.1.1.1. Using local filesystem storage

Persistent volumes:

  • 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence
  • 1 RWX (ReadWriteMany) persistent volume for Developer Portal content and System-app Assets

Configure the RWX persistent volume to be group writable. For a list of persistent volume types that support the required access modes, see the OpenShift documentation.

1.1.1.2. Using Amazon Simple Storage Service (Amazon S3) storage

Persistent volumes:

  • 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence

Storage:

  • 1 Amazon S3 bucket

1.1.2. Hardware requirements

Hardware requirements depend on your usage needs. Red Hat recommends that you test and configure your environment to meet your specific requirements. The following are the recommendations when configuring your environment for 3scale on OpenShift:

  • Compute optimized nodes for deployments on cloud environments (AWS c4.2xlarge or Azure Standard_F8).
  • Very large installations may require a separate node (AWS M4 series or Azure Av2 series) for Redis if memory requirements exceed your current node’s available RAM.
  • Separate nodes between routing and compute tasks.
  • Dedicated computing nodes for 3scale specific tasks.
  • Set the PUMA_WORKERS variable of the back-end listener to the number of cores in your compute node.
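As a sketch of the last recommendation, and assuming the backend listener runs as a DeploymentConfig named backend-listener (verify the name in your cluster with oc get dc), you could derive the worker count from the compute node's core count:

```shell
# Hypothetical sketch: match PUMA_WORKERS to the compute node's core count.
# "backend-listener" is an assumed DeploymentConfig name; check `oc get dc`.
CORES=$(nproc)   # number of cores on this node
echo "oc set env dc/backend-listener PUMA_WORKERS=${CORES}"
# Run the printed command against your cluster to apply the setting.
```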

1.2. Configuring nodes and entitlements

Before deploying 3scale on OpenShift, you must configure the necessary nodes and the entitlements for the environment to fetch images from the Red Hat Container Registry. Perform the following steps to configure the nodes and entitlements:

Procedure

  1. Install Red Hat Enterprise Linux (RHEL) on each of your nodes.
  2. Register your nodes with Red Hat using the Red Hat Subscription Manager (RHSM), via the interface or the command line.
  3. Attach your nodes to your 3scale subscription using RHSM.
  4. Install OpenShift on your nodes, complying with the following requirements:

  5. Install the OpenShift command line interface.
  6. Enable access to the rhel-7-server-3scale-amp-2-rpms repository using the subscription manager:

    sudo subscription-manager repos --enable=rhel-7-server-3scale-amp-2-rpms
  7. Install the 3scale template package, 3scale-amp-template. The templates are saved to /opt/amp/templates.

    sudo yum install 3scale-amp-template

1.2.1. Configuring Amazon Simple Storage Service

Important

Skip this section if you are deploying 3scale with local filesystem storage.

If you want to use an Amazon Simple Storage Service (Amazon S3) bucket as storage, you must configure your bucket before you can deploy 3scale on OpenShift.

Perform the following steps to configure your Amazon S3 bucket for 3scale:

  1. Create an Identity and Access Management (IAM) policy with the following minimum permissions:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:ListAllMyBuckets",
                "Resource": "arn:aws:s3:::*"
            },
            {
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::targetBucketName",
                    "arn:aws:s3:::targetBucketName/*"
                ]
            }
        ]
    }
  2. Create a CORS configuration with the following rules:

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
    </CORSRule>
    </CORSConfiguration>
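If you script the bucket setup with the AWS CLI, the two steps above can be sketched as follows. Note that aws s3api put-bucket-cors takes the CORS rules as JSON rather than XML, and threescale-s3-policy is a placeholder policy name:

```shell
# Sketch of scripting the two steps above with the AWS CLI.
# Replace targetBucketName with your bucket; threescale-s3-policy is a
# placeholder policy name.

# Step 1: save the minimum IAM policy and check that it is valid JSON.
cat > /tmp/threescale-s3-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:ListAllMyBuckets",
         "Resource": "arn:aws:s3:::*"},
        {"Effect": "Allow", "Action": "s3:*",
         "Resource": ["arn:aws:s3:::targetBucketName",
                      "arn:aws:s3:::targetBucketName/*"]}
    ]
}
EOF
python3 -m json.tool /tmp/threescale-s3-policy.json > /dev/null && echo "policy JSON is valid"

# With AWS credentials configured, create the policy and the CORS rules:
#   aws iam create-policy --policy-name threescale-s3-policy \
#     --policy-document file:///tmp/threescale-s3-policy.json
#   aws s3api put-bucket-cors --bucket targetBucketName --cors-configuration \
#     '{"CORSRules":[{"AllowedOrigins":["https://*"],"AllowedMethods":["GET"]}]}'
```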

1.3. Deploying 3scale on OpenShift using a template

Note

OpenShift Container Platform (OCP) 4.x supports deployment of 3scale using the operator only. See Deploying 3scale using the operator.

Prerequisites

  • An OpenShift cluster configured as specified in the Configuring nodes and entitlements section.
  • A domain that resolves to your OpenShift cluster.
  • Access to the Red Hat container catalog.
  • (Optional) An Amazon Simple Storage Service (Amazon S3) bucket for content management system (CMS) storage outside of the local filesystem.
  • (Optional) A deployment with PostgreSQL.

    • This is the same as the default deployment on OpenShift, except that it uses PostgreSQL as the internal system database.
  • (Optional) A working SMTP server for email functionality.
Note

Deploying 3scale on OpenShift using a template is based on OpenShift Container Platform 3.11.

Follow these procedures to install 3scale on OpenShift using a .yml template:

1.4. Configuring container registry authentication

As a 3scale administrator, configure authentication with registry.redhat.io before you deploy 3scale container images on OpenShift.

Prerequisites

  • Cluster administrator access to an OpenShift Container Platform cluster.
  • OpenShift oc client tool is installed. For more details, see the OpenShift CLI documentation.

Procedure

  1. Log into your OpenShift cluster as administrator:

    $ oc login -u system:admin
  2. Open the project in which you want to deploy 3scale:

    $ oc project myproject
  3. Create a docker-registry secret using your Red Hat Customer Portal account, replacing threescale-registry-auth with the name of the secret to create:

    $ oc create secret docker-registry threescale-registry-auth \
      --docker-server=registry.redhat.io \
      --docker-username=CUSTOMER_PORTAL_USERNAME \
      --docker-password=CUSTOMER_PORTAL_PASSWORD \
      --docker-email=EMAIL_ADDRESS

    You will see the following output:

    secret/threescale-registry-auth created
  4. Link the secret to your service account to use the secret for pulling images. The service account name must match the name that the OpenShift pod uses. This example uses the default service account:

    $ oc secrets link default threescale-registry-auth --for=pull
  5. Link the secret to the builder service account to use the secret for pushing and pulling build images:

    $ oc secrets link builder threescale-registry-auth

Additional resources

For more details on authenticating with Red Hat for container images, see Authenticating with registry.redhat.io for container images.

1.4.1. Creating registry service accounts

To use container images from registry.redhat.io in a shared environment with 3scale 2.8 deployed on OpenShift, you must use a Registry Service Account instead of an individual user’s Customer Portal credentials.

Note

It is a requirement for 3scale 2.8 that you follow the steps outlined below before deploying either on OpenShift using a template or via the operator, as both options use registry authentication.

Procedure

  1. Navigate to the Registry Service Accounts page and log in.
  2. Click New Service Account.
  3. Fill in the form on the Create a New Registry Service Account page.

    1. Add a name for the service account.

      Note: You will see a fixed-length, randomly generated number string before the form field.

  4. Enter a Description.
  5. Click Create.
  6. Navigate back to your Service Accounts.
  7. Click the Service Account you created.
  8. Make a note of the username, including the prefix string, for example 12345678|username, and your password.

    1. This username and password will be used to log in to registry.redhat.io.
Note

There are tabs available on the Token Information page that show you how to use the authentication token. For example, the Token Information tab shows the username in the format 12345678|username and the password string below it.

1.4.2. Modifying registry service accounts

Service accounts can be modified or deleted. This can be done from the Registry Service Accounts page, using the pop-up menu to the right of each authentication token in the table.

Warning

The regeneration or removal of service accounts will impact systems that are using the token to authenticate and retrieve content from registry.redhat.io.

A description for each function is as follows:

  • Regenerate token: Allows an authorized user to reset the password associated with the Service Account.

    Note: The username for the Service Account cannot be changed.

  • Update Description: Allows an authorized user to update the description for the Service Account.
  • Delete Account: Allows an authorized user to remove the Service Account.

1.4.3. Importing the 3scale template

Note
  • Wildcard routes have been removed as of 3scale 2.6.

    • This functionality is handled by Zync in the background.
  • When API providers are created, updated, or deleted, routes automatically reflect those changes.

Perform the following steps to import the 3scale template into your OpenShift cluster:

Procedure

  1. From a terminal session, log in to OpenShift as the cluster administrator:

    oc login
  2. Select your project, or create a new project:

    oc project <project_name>

    OR

    oc new-project <project_name>
  3. Enter the oc new-app command:

    1. Specify the --file option with the path to the amp.yml file you downloaded as part of Configuring nodes and entitlements.
    2. Specify the --param option with the WILDCARD_DOMAIN parameter set to the domain of your OpenShift cluster:

      oc new-app --file /opt/amp/templates/amp.yml --param WILDCARD_DOMAIN=<WILDCARD_DOMAIN>

      The terminal shows the master and tenant URLs and credentials for your newly created 3scale Admin Portal. This output should include the following information:

      • master admin username
      • master password
      • master token information
      • tenant username
      • tenant password
      • tenant token information
  4. Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.

    * With parameters:
    
     * ADMIN_PASSWORD=xXxXyz123 # generated
     * ADMIN_USERNAME=admin
     * TENANT_NAME=user
    
     * MASTER_NAME=master
     * MASTER_USER=master
     * MASTER_PASSWORD=xXxXyz123 # generated
    
    --> Success
    Access your application via route 'user-admin.3scale-project.example.com'
    Access your application via route 'master-admin.3scale-project.example.com'
    Access your application via route 'backend-user.3scale-project.example.com'
    Access your application via route 'user.3scale-project.example.com'
    Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
    Access your application via route 'api-user-apicast-production.3scale-project.example.com'
  5. Make a note of these details for future reference.
  6. The 3scale deployment on OpenShift has been successful when the following command returns:

    oc wait --for=condition=available --timeout=-1s $(oc get dc --output=name)
    Note

    When the 3scale deployment on OpenShift has been successful, your login credentials will work.

1.4.4. Getting the Admin Portal URL

When you deploy 3scale using the template, a default tenant is created, with a fixed URL: 3scale-admin.${wildcardDomain}

The 3scale Dashboard shows the new portal URL of the tenant. As an example, if the <wildCardDomain> is 3scale-project.example.com, the Admin Portal URL is: https://3scale-admin.3scale-project.example.com.

The wildcardDomain is the <wildCardDomain> parameter you provided during installation. Open this unique URL in a browser using this command:

xdg-open https://3scale-admin.3scale-project.example.com

Optionally, you can create new tenants on the MASTER portal URL: master.${wildcardDomain}
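The Admin Portal URL shown above is composed of the tenant name, the -admin suffix, and the wildcard domain. A deterministic sketch using the example values from this section:

```shell
# Build the Admin Portal URL from the tenant name and wildcard domain.
TENANT_NAME="3scale"                          # default tenant name
WILDCARD_DOMAIN="3scale-project.example.com"  # example from above
echo "https://${TENANT_NAME}-admin.${WILDCARD_DOMAIN}"
# prints: https://3scale-admin.3scale-project.example.com
```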

1.4.5. Deploying 3scale with Amazon Simple Storage Service

Deploying 3scale with Amazon Simple Storage Service (Amazon S3) is an optional procedure. Deploy 3scale with Amazon S3 using the following steps:

Procedure

  1. Download amp-s3.yml.
  2. Log in to OpenShift from a terminal session:

    oc login
  3. Select your project, or create a new project:

    oc project <project_name>

    OR

    oc new-project <project_name>
  4. Enter the oc new-app command:

    • Specify the --file option with the path to the amp-s3.yml file.
    • Specify the --param options with the following values:
    • WILDCARD_DOMAIN: the parameter set to the domain of your OpenShift cluster.
    • AWS_BUCKET: with your target bucket name.
    • AWS_ACCESS_KEY_ID: with your AWS credentials ID.
    • AWS_SECRET_ACCESS_KEY: with your AWS credentials KEY.
    • AWS_REGION: with the AWS region of your bucket.
    • AWS_HOSTNAME: Default: Amazon endpoints - AWS S3 compatible provider endpoint hostname.
    • AWS_PROTOCOL: Default: HTTPS - AWS S3 compatible provider endpoint protocol.
    • AWS_PATH_STYLE: Default: false - When set to true, the bucket name is always left in the request URI and never moved to the host as a sub-domain.
    • Optionally, specify the --param option with the TENANT_NAME parameter to set a custom name for the Admin Portal. If omitted, this defaults to 3scale.

      oc new-app --file /path/to/amp-s3.yml \
      	--param WILDCARD_DOMAIN=<a-domain-that-resolves-to-your-ocp-cluster.com> \
      	--param TENANT_NAME=3scale \
      	--param AWS_ACCESS_KEY_ID=<your-aws-access-key-id> \
      	--param AWS_SECRET_ACCESS_KEY=<your-aws-access-key-secret> \
      	--param AWS_BUCKET=<your-target-bucket-name> \
      	--param AWS_REGION=<your-aws-bucket-region> \
      	--param FILE_UPLOAD_STORAGE=s3

      The terminal shows the master and tenant URLs, as well as credentials for your newly created 3scale Admin Portal. This output should include the following information:

    • master admin username
    • master password
    • master token information
    • tenant username
    • tenant password
    • tenant token information
  5. Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.

    ...
    
    * With parameters:
     * ADMIN_PASSWORD=xXxXyz123 # generated
     * ADMIN_USERNAME=admin
     * TENANT_NAME=user
     ...
    
     * MASTER_NAME=master
     * MASTER_USER=master
     * MASTER_PASSWORD=xXxXyz123 # generated
     ...
    
    --> Success
    Access your application via route 'user-admin.3scale-project.example.com'
    Access your application via route 'master-admin.3scale-project.example.com'
    Access your application via route 'backend-user.3scale-project.example.com'
    Access your application via route 'user.3scale-project.example.com'
    Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
    Access your application via route 'api-user-apicast-production.3scale-project.example.com'
    Access your application via route 'apicast-wildcard.3scale-project.example.com'
    
    ...
  6. Make a note of these details for future reference.
  7. The 3scale deployment on OpenShift has been successful when the following command returns:

    oc wait --for=condition=available --timeout=-1s $(oc get dc --output=name)
    Note

    When the 3scale deployment on OpenShift has been successful, your login credentials will work.

1.4.6. Deploying 3scale with PostgreSQL

Deploying 3scale with PostgreSQL is an optional procedure. Deploy 3scale with PostgreSQL using the following steps:

Procedure

  1. Download amp-postgresql.yml.
  2. Log in to OpenShift from a terminal session:

    oc login
  3. Select your project, or create a new project:

    oc project <project_name>

    OR

    oc new-project <project_name>
  4. Enter the oc new-app command:

    • Specify the --file option with the path to the amp-postgresql.yml file.
    • Specify the --param options with the following values:
    • WILDCARD_DOMAIN: the parameter set to the domain of your OpenShift cluster.
    • Optionally, specify the --param option with the TENANT_NAME parameter to set a custom name for the Admin Portal. If omitted, this defaults to 3scale.

      oc new-app --file /path/to/amp-postgresql.yml \
      	--param WILDCARD_DOMAIN=<a-domain-that-resolves-to-your-ocp-cluster.com> \
      	--param TENANT_NAME=3scale

      The terminal shows the master and tenant URLs, as well as the credentials for your newly created 3scale Admin Portal. This output should include the following information:

    • master admin username
    • master password
    • master token information
    • tenant username
    • tenant password
    • tenant token information
  5. Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.

    ...
    
    * With parameters:
     * ADMIN_PASSWORD=xXxXyz123 # generated
     * ADMIN_USERNAME=admin
     * TENANT_NAME=user
     ...
    
     * MASTER_NAME=master
     * MASTER_USER=master
     * MASTER_PASSWORD=xXxXyz123 # generated
     ...
    
    --> Success
    Access your application via route 'user-admin.3scale-project.example.com'
    Access your application via route 'master-admin.3scale-project.example.com'
    Access your application via route 'backend-user.3scale-project.example.com'
    Access your application via route 'user.3scale-project.example.com'
    Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
    Access your application via route 'api-user-apicast-production.3scale-project.example.com'
    Access your application via route 'apicast-wildcard.3scale-project.example.com'
    
    ...
  6. Make a note of these details for future reference.
  7. The 3scale deployment on OpenShift has been successful when the following command returns:

    oc wait --for=condition=available --timeout=-1s $(oc get dc --output=name)
    Note

    When the 3scale deployment on OpenShift has been successful, your login credentials will work.

1.4.7. Configuring SMTP variables (optional)

OpenShift uses email to send notifications and invite new users. If you intend to use these features, you must provide your own SMTP server and configure SMTP variables in the system-smtp secret.

Perform the following steps to configure the SMTP variables in the system-smtp secret:

Procedure

  1. If you are not already logged in, log in to OpenShift:

    oc login
  2. Using the oc patch command, specify the secret type where system-smtp is the name of the secret, followed by the -p option, and write the new values in JSON for the following variables:

      Variable            | Description
      --------------------|------------------------------------------------------------
      address             | Allows you to specify a remote mail server as a relay
      username            | Specify your mail server username
      password            | Specify your mail server password
      domain              | Specify a HELO domain
      port                | Specify the port on which the mail server is listening for new connections
      authentication      | Specify the authentication type of your mail server. Allowed values: plain (sends the password in the clear), login (sends the password Base64 encoded), or cram_md5 (exchanges information and uses the Message Digest 5 algorithm to hash important information)
      openssl.verify.mode | Specify how OpenSSL checks certificates when using TLS. Allowed values: none or peer.

      Example

      oc patch secret system-smtp -p '{"stringData":{"address":"<your_address>"}}'
      oc patch secret system-smtp -p '{"stringData":{"username":"<your_username>"}}'
      oc patch secret system-smtp -p '{"stringData":{"password":"<your_password>"}}'
  3. After you have set the secret variables, redeploy the system-app and system-sidekiq pods:

    oc rollout latest dc/system-app
    oc rollout latest dc/system-sidekiq
  4. Check the status of the rollout to ensure it has finished:

    oc rollout status dc/system-app
    oc rollout status dc/system-sidekiq
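The individual oc patch calls shown above can also be combined into a single command. In this sketch every value is a placeholder; "587" and "plain" are illustrative choices, not defaults from the template:

```shell
# Combine several SMTP settings into one patch. All values are placeholders.
PATCH='{"stringData":{"address":"smtp.example.com","port":"587","authentication":"plain"}}'
echo "${PATCH}" | python3 -m json.tool > /dev/null && echo "patch JSON is valid"
# Apply against your cluster with:
#   oc patch secret system-smtp -p "${PATCH}"
```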

1.5. Parameters of the 3scale template

Template parameters configure environment variables of the 3scale (amp.yml) template during and after deployment.

Name | Description | Default Value | Required?
---- | ----------- | ------------- | ---------
APP_LABEL | Used for object app labels. | 3scale-api-management | yes
ZYNC_DATABASE_PASSWORD | Password for the PostgreSQL connection user. Generated randomly if not provided. | N/A | yes
ZYNC_SECRET_KEY_BASE | Secret key base for Zync. Generated randomly if not provided. | N/A | yes
ZYNC_AUTHENTICATION_TOKEN | Authentication token for Zync. Generated randomly if not provided. | N/A | yes
AMP_RELEASE | 3scale release tag. | 2.8.0 | yes
ADMIN_PASSWORD | A randomly generated 3scale administrator account password. | N/A | yes
ADMIN_USERNAME | 3scale administrator account username. | admin | yes
APICAST_ACCESS_TOKEN | Read Only Access Token that APIcast will use to download its configuration. | N/A | yes
ADMIN_ACCESS_TOKEN | Admin Access Token with all scopes and write permissions for API access. | N/A | no
WILDCARD_DOMAIN | Root domain for the wildcard routes. For example, a root domain example.com will generate 3scale-admin.example.com. | N/A | yes
TENANT_NAME | Tenant name under the root that the Admin Portal will be available at, with the -admin suffix. | 3scale | yes
MYSQL_USER | Username for the MySQL user that will be used for accessing the database. | mysql | yes
MYSQL_PASSWORD | Password for the MySQL user. | N/A | yes
MYSQL_DATABASE | Name of the MySQL database accessed. | system | yes
MYSQL_ROOT_PASSWORD | Password for the MySQL root user. | N/A | yes
SYSTEM_BACKEND_USERNAME | Internal 3scale API username for internal 3scale API auth. | 3scale_api_user | yes
SYSTEM_BACKEND_PASSWORD | Internal 3scale API password for internal 3scale API auth. | N/A | yes
REDIS_IMAGE | Redis image to use. | registry.redhat.io/rhscl/redis-5-rhel7:5.0 | yes
MYSQL_IMAGE | MySQL image to use. | registry.redhat.io/rhscl/mysql-57-rhel7:5.7 | yes
MEMCACHED_IMAGE | Memcached image to use. | registry.redhat.io/3scale-amp2/memcached-rhel7:3scale2.8 | yes
POSTGRESQL_IMAGE | PostgreSQL image to use. | registry.redhat.io/rhscl/postgresql-10-rhel7 | yes
AMP_SYSTEM_IMAGE | 3scale System image to use. | registry.redhat.io/3scale-amp2/system-rhel7:3scale2.8 | yes
AMP_BACKEND_IMAGE | 3scale Backend image to use. | registry.redhat.io/3scale-amp2/backend-rhel7:3scale2.8 | yes
AMP_APICAST_IMAGE | 3scale APIcast image to use. | registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.8 | yes
AMP_ZYNC_IMAGE | 3scale Zync image to use. | registry.redhat.io/3scale-amp2/zync-rhel7:3scale2.8 | yes
SYSTEM_BACKEND_SHARED_SECRET | Shared secret to import events from backend to system. | N/A | yes
SYSTEM_APP_SECRET_KEY_BASE | System application secret key base. | N/A | yes
APICAST_MANAGEMENT_API | Scope of the APIcast Management API. Can be disabled, status, or debug. At least status is required for health checks. | status | no
APICAST_OPENSSL_VERIFY | Turn on/off OpenSSL peer verification when downloading the configuration. Can be set to true/false. | false | no
APICAST_RESPONSE_CODES | Enable logging response codes in APIcast. | true | no
APICAST_REGISTRY_URL | A URL which resolves to the location of APIcast policies. | http://apicast-staging:8090/policies | yes
MASTER_USER | Master administrator account username. | master | yes
MASTER_NAME | The subdomain value for the master Admin Portal; will be appended with the -master suffix. | master | yes
MASTER_PASSWORD | A randomly generated master administrator password. | N/A | yes
MASTER_ACCESS_TOKEN | A token with master level permissions for API calls. | N/A | yes
IMAGESTREAM_TAG_IMPORT_INSECURE | Set to true if the server may bypass certificate verification or connect directly over HTTP during image import. | false | yes

1.6. Using APIcast with 3scale on OpenShift

APIcast is available with API Manager for 3scale hosted, and in on-premises installations in OpenShift Container Platform. The configuration procedures are different for each.

This section explains how to deploy APIcast with API Manager on OpenShift.

1.6.1. Deploying APIcast templates on an existing OpenShift cluster containing 3scale

3scale OpenShift templates contain two embedded APIcast gateways by default. If you require more API gateways, or require separate APIcast deployments, you can deploy additional APIcast templates on your OpenShift cluster.

Perform the following steps to deploy additional API gateways on your OpenShift cluster:

Procedure

  1. Create an access token with the following configurations:

    • Scoped to Account Management API
    • Having read-only access
  2. Log in to your APIcast cluster:

    oc login
  3. Create a secret that allows APIcast to communicate with 3scale. Specify the create secret and apicast-configuration-url-secret parameters with the access token, tenant name, and wildcard domain of your 3scale deployment:

    oc create secret generic apicast-configuration-url-secret --from-literal=password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
    Note

    TENANT_NAME is the name under the root that the Admin Portal will be available with. The default value for TENANT_NAME is 3scale. If you used a custom value in your 3scale deployment, you must use that value here.

  4. Import the APIcast template using the oc new-app command, specifying the --file option with the apicast.yml file:

    oc new-app --file /opt/amp/templates/apicast.yml
    Note

    First install the APIcast template as described in Configuring nodes and entitlements.
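The password value stored in apicast-configuration-url-secret in step 3 is simply a URL with the access token embedded as the userinfo part. A deterministic sketch of how it is composed, with placeholder values only:

```shell
# Compose the configuration URL stored in apicast-configuration-url-secret.
# All three values are placeholders for your own deployment's values.
ACCESS_TOKEN="<ACCESS_TOKEN>"
TENANT_NAME="3scale"
WILDCARD_DOMAIN="example.com"
echo "https://${ACCESS_TOKEN}@${TENANT_NAME}-admin.${WILDCARD_DOMAIN}"
# prints: https://<ACCESS_TOKEN>@3scale-admin.example.com
```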

1.6.2. Connecting APIcast from a different OpenShift cluster

If you deploy APIcast on a different OpenShift cluster, outside your 3scale cluster, you must connect through the public route:

Procedure

  1. Create an access token with the following configurations:

    • Scoped to Account Management API
    • Having read-only access
  2. Log in to your APIcast cluster:

    oc login
  3. Create a secret that allows APIcast to communicate with 3scale. Specify the create secret and apicast-configuration-url-secret parameters with the access token, tenant name, and wildcard domain of your 3scale deployment:

    oc create secret generic apicast-configuration-url-secret --from-literal=password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
    Note

    TENANT_NAME is the name under the root that the Admin Portal will be available with. The default value for TENANT_NAME is 3scale. If you used a custom value in your 3scale deployment, you must use that value.

  4. Deploy APIcast on a different OpenShift cluster using the oc new-app command. Specify the --file option and the path to your apicast.yml file:

    oc new-app --file /path/to/file/apicast.yml

1.6.3. Changing the default behavior for embedded APIcast

In external APIcast deployments, you can modify default behavior by changing the template parameters in the APIcast OpenShift template.

In embedded APIcast deployments, 3scale and APIcast are deployed from a single template. You must modify environment variables after deployment if you wish to change the default behavior for the embedded APIcast deployments.

1.6.4. Connecting multiple APIcast deployments on a single OpenShift cluster over internal service routes

If you deploy multiple APIcast gateways into the same OpenShift cluster, you can configure them to connect using internal routes through the backend listener service instead of the default external route configuration.

You must have an OpenShift Software-Defined Networking (SDN) plugin installed to connect over internal service routes. How you connect depends on which SDN you have installed:

ovs-subnet

If you are using the ovs-subnet OpenShift SDN plugin, perform the following steps to connect over internal routes:

Procedure

  1. If not already logged in, log in to your OpenShift cluster:

    oc login
  2. Enter the following command to display the backend-listener route URL:

    oc get route backend
  3. Enter the oc new-app command with the path to apicast.yml:

    oc new-app -f apicast.yml

ovs-multitenant

If you are using the ovs-multitenant OpenShift SDN plugin, perform the following steps to connect over internal routes:

Procedure

  1. If not already logged in, log in to your OpenShift cluster:

    oc login
  2. As administrator, specify the oadm command with the pod-network and join-projects options to set up communication between both projects:

    oadm pod-network join-projects --to=<3SCALE_PROJECT> <APICAST_PROJECT>
  3. Enter the following command to display the backend-listener route URL:

    oc get route backend
  4. Enter the oc new-app command with the path to apicast.yml:

    oc new-app -f apicast.yml

Additional resources

For information on OpenShift SDN and project network isolation, see OpenShift SDN.

1.6.5. Connecting APIcast on other deployments

If you deploy APIcast on Docker, you can connect APIcast to 3scale deployed on OpenShift by setting the THREESCALE_PORTAL_ENDPOINT parameter to the URL and access token of your 3scale Admin Portal deployed on OpenShift. You do not need to set the BACKEND_ENDPOINT_OVERRIDE parameter in this case.

Additional resources

For more details, see Deploying APIcast on the Docker containerized environment.

1.7. Deploying 3scale using the operator

This section takes you through installing and deploying the 3scale solution via the 3scale operator, using the APIManager custom resource.

Note
  • Wildcard routes have been removed as of 3scale 2.6.

    • This functionality is handled by Zync in the background.
  • When API providers are created, updated, or deleted, routes automatically reflect those changes.

Prerequisites

Follow these procedures to deploy 3scale using the operator:

1.7.1. Deploying the APIManager custom resource

Deploying the APIManager custom resource triggers the operator to begin processing it and to deploy a 3scale solution from it.

Procedure

  1. Click Catalog > Installed Operators.

    1. From the list of Installed Operators, click 3scale Operator.
  2. Click the API Manager tab.
  3. Click Create APIManager.
  4. Clear the sample content and add the following YAML definitions to the editor, then click Create.

    • Before 3scale 2.8, you could configure the automatic addition of replicas by setting the highAvailability field to true. From 3scale 2.8, the addition of replicas is controlled through the replicas field in the APIManager CR as shown in the following example.

      Note

      The wildcardDomain parameter can be any desired name that is a valid DNS domain and resolves to an IP address.

    • APIManager CR with minimum requirements:

      apiVersion: apps.3scale.net/v1alpha1
      kind: APIManager
      metadata:
        name: apimanager-sample
      spec:
        wildcardDomain: example.com
    • APIManager CR with replicas configured:

      apiVersion: apps.3scale.net/v1alpha1
      kind: APIManager
      metadata:
        name: apimanager-sample
      spec:
        system:
          appSpec:
            replicas: 1
          sidekiqSpec:
            replicas: 1
        zync:
          appSpec:
            replicas: 1
          queSpec:
            replicas: 1
        backend:
          cronSpec:
            replicas: 1
          listenerSpec:
            replicas: 1
          workerSpec:
            replicas: 1
        apicast:
          productionSpec:
            replicas: 1
          stagingSpec:
            replicas: 1
        wildcardDomain: example.com
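As an alternative to the console editor, the minimal APIManager CR above can also be created from the command line. This sketch assumes the 3scale operator is already installed in the current project:

```shell
# Write the minimal APIManager CR from the example above to a file.
cat > /tmp/apimanager-sample.yaml <<'EOF'
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  wildcardDomain: example.com
EOF
grep -q 'kind: APIManager' /tmp/apimanager-sample.yaml && echo "CR file written"
# Create it in your cluster with:
#   oc apply -f /tmp/apimanager-sample.yaml
```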

1.7.2. Getting the APIManager Admin Portal and Master Admin Portal credentials

To log in to either the 3scale Admin Portal or Master Admin Portal after the operator-based deployment, you need the credentials for each separate portal. To get these credentials:

  1. Run the following commands to get the Admin Portal credentials:

    oc get secret system-seed -o json | jq -r .data.ADMIN_USER | base64 -d
    oc get secret system-seed -o json | jq -r .data.ADMIN_PASSWORD | base64 -d
    1. Log in as the Admin Portal administrator to verify these credentials are working.
  2. Run the following commands to get the Master Admin Portal credentials:

    oc get secret system-seed -o json | jq -r .data.MASTER_USER | base64 -d
    oc get secret system-seed -o json | jq -r .data.MASTER_PASSWORD | base64 -d
    1. Log in as the Master Admin Portal administrator to verify these credentials are working.
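The commands above extract a base64-encoded field from the secret's JSON and decode it. The decoding step can be sketched on its own, without a cluster (the value "admin" below is an illustrative placeholder, not a real credential):

```shell
# Secret data fields are base64-encoded; `base64 -d` recovers the
# plain-text value, exactly as in the pipelines above.
encoded=$(printf 'admin' | base64)      # stands in for a .data field
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```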

Additional resources

For more information about the APIManager fields, refer to the Reference documentation.

1.7.3. Getting the Admin Portal URL

When you deploy 3scale using the operator, a default tenant is created, with a fixed URL: 3scale-admin.${wildcardDomain}

The 3scale Dashboard shows the new portal URL of the tenant. As an example, if the <wildcardDomain> is 3scale-project.example.com, the Admin Portal URL is: https://3scale-admin.3scale-project.example.com.

The wildcardDomain is the <wildcardDomain> parameter you provided during installation. Open this unique URL in a browser using the following command:

xdg-open https://3scale-admin.3scale-project.example.com

Optionally, you can create new tenants on the MASTER portal URL: master.${wildcardDomain}
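Both portal URLs follow the same pattern and can be derived from the wildcard domain in a small shell sketch (the domain value below is the example used earlier in this section):

```shell
# Derive the default tenant Admin Portal URL and the master portal URL
# from the wildcardDomain given in the APIManager custom resource.
WILDCARD_DOMAIN="3scale-project.example.com"
ADMIN_PORTAL_URL="https://3scale-admin.${WILDCARD_DOMAIN}"
MASTER_PORTAL_URL="https://master.${WILDCARD_DOMAIN}"
echo "$ADMIN_PORTAL_URL"
echo "$MASTER_PORTAL_URL"
```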

1.7.4. High availability in 3scale using the operator

High availability (HA) in 3scale using the operator aims to provide uninterrupted uptime if, for example, one or more databases were to fail.

Note

.spec.highAvailability.enabled is only for external databases.

If you want HA in your 3scale operator-based deployment, note the following:

  • Deploy and configure the 3scale critical databases externally, specifically the system database, system Redis, and backend Redis. Make sure you deploy and configure those databases so that they are highly available.
  • Specify the connection endpoints to those databases for 3scale by pre-creating their corresponding Kubernetes Secrets.

  • Set the .spec.highAvailability.enabled attribute to true when deploying the APIManager CR to enable external database mode for the critical databases: system database, system redis, and backend redis.

Additionally, if you want the zync database to be highly available to avoid zync potentially losing queue jobs data on restart, note the following:

  • Deploy and configure the zync database externally. Make sure you deploy and configure the database in a way that it is highly available.
  • Specify the connection endpoint to the zync database for 3scale by pre-creating its corresponding Kubernetes Secrets.

    • See Zync database secret for more information.
    • Deploy 3scale with the spec.highAvailability.externalZyncDatabaseEnabled attribute set to true to specify the zync database as an external database.

1.8. Deployment configuration options for 3scale on OpenShift using the operator

This section provides information about the deployment configuration options for Red Hat 3scale API Management on OpenShift using the operator.

1.8.1. Default deployment configuration

By default, the following deployment configuration options will be applied:

  • Containers will have Kubernetes resource limits and requests.

    • This ensures a minimum performance level.
    • This constrains resource consumption, allowing safe allocation of cluster resources for other services and solutions.
  • Deployment of internal databases.
  • File storage will be based on Persistent Volumes (PV).

    • One will require the ReadWriteMany (RWX) access mode.
    • OpenShift must be configured to provide them upon request.
  • Deploy MySQL as the internal relational database.

The default configuration option is suitable for proof of concept (PoC) or evaluation by a customer.

One, many, or all of the default configuration options can be overridden with specific field values in the APIManager custom resource. The 3scale operator allows all available combinations, whereas templates allow only fixed deployment profiles. For example, the 3scale operator allows deployment of 3scale in evaluation mode and external databases mode. Templates do not allow this specific deployment configuration. Templates are only available for the most common configuration options.

1.8.2. Evaluation installation

For an evaluation installation, containers will not have Kubernetes resource limits and requests specified. This configuration provides:

  • Small memory footprint
  • Fast startup
  • Runnable on laptop
  • Suitable for presale/sales demos

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  resourceRequirementsEnabled: false

Check APIManager custom resource for reference.

1.8.3. External databases installation

An external databases installation is suitable for production use where high availability (HA) is a requirement or where you plan to reuse your own databases.

Important

When enabling the 3scale external databases installation mode, all of the following databases are externalized:

  • backend-redis
  • system-redis
  • system-database (mysql, postgresql, or oracle)

3scale 2.8 and above has been tested and is supported with the following database versions:

Database      Version
Redis         5.0
MySQL         5.7
PostgreSQL    10.6

Before creating the APIManager custom resource to deploy 3scale, you must provide the following connection settings for the external databases using OpenShift secrets.

1.8.3.1. Backend Redis secret

Deploy two external Redis instances and fill in the connection settings as shown in the following example:

apiVersion: v1
kind: Secret
metadata:
  name: backend-redis
stringData:
  REDIS_STORAGE_URL: "redis://backend-redis-storage"
  REDIS_STORAGE_SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379,redis://sentinel-2.example.com:26379"
  REDIS_STORAGE_SENTINEL_ROLE: "master"
  REDIS_QUEUES_URL: "redis://backend-redis-queues"
  REDIS_QUEUES_SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379,redis://sentinel-2.example.com:26379"
  REDIS_QUEUES_SENTINEL_ROLE: "master"
type: Opaque

The Secret name must be backend-redis.

1.8.3.2. System Redis secret

Deploy two external Redis instances and fill in the connection settings as shown in the following example:

apiVersion: v1
kind: Secret
metadata:
  name: system-redis
stringData:
  URL: "redis://system-redis"
  SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379,redis://sentinel-2.example.com:26379"
  SENTINEL_ROLE: "master"
  NAMESPACE: ""
  MESSAGE_BUS_URL: "redis://system-redis-messagebus"
  MESSAGE_BUS_SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379,redis://sentinel-2.example.com:26379"
  MESSAGE_BUS_SENTINEL_ROLE: "master"
  MESSAGE_BUS_NAMESPACE: ""
type: Opaque

The Secret name must be system-redis.

1.8.3.3. System database secret

Note

The Secret name must be system-database.

When you are deploying 3scale, you have three alternatives for your system database. Configure different attributes and values for each alternative’s related secret.

  • MySQL
  • PostgreSQL
  • Oracle Database

To deploy a MySQL, PostgreSQL, or an Oracle Database system database secret, fill in the connection settings as shown in the following examples:

MySQL system database secret

apiVersion: v1
kind: Secret
metadata:
  name: system-database
stringData:
  URL: "mysql2://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
type: Opaque

PostgreSQL system database secret

apiVersion: v1
kind: Secret
metadata:
  name: system-database
stringData:
  URL: "postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
type: Opaque

Oracle system database secret

apiVersion: v1
kind: Secret
metadata:
  name: system-database
stringData:
  URL: "oracle-enhanced://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
  ORACLE_SYSTEM_PASSWORD: "{SYSTEM_PASSWORD}"
type: Opaque
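All three URL formats share the same user:password@host:port/name structure. As a sketch, the URL can be assembled from its parts before being placed in the secret (all values below are placeholders; substitute your own connection details):

```shell
# Assemble a system-database URL from its components before adding it
# to the system-database secret. Placeholder values only.
DB_USER="app"
DB_PASSWORD="changeme"
DB_HOST="db.example.com"
DB_PORT="3306"
DB_NAME="system"
URL="mysql2://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$URL"
```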

1.8.3.4. Zync database secret

In a zync database setup, when HighAvailability is enabled, and if the externalZyncDatabaseEnabled field is also enabled, you must pre-create a secret named zync. In that secret, set the DATABASE_URL and ZYNC_DATABASE_PASSWORD fields to values pointing to your external database. The external database must be in high-availability mode. See the following example:

apiVersion: v1
kind: Secret
metadata:
  name: zync
stringData:
  DATABASE_URL: postgresql://<zync-db-user>:<zync-db-password>@<zync-db-host>:<zync-db-port>/zync_production
  ZYNC_DATABASE_PASSWORD: <zync-db-password>
type: Opaque

1.8.3.5. APIManager custom resources to deploy 3scale

Note
  • When you enable highAvailability, you must pre-create the backend-redis, system-redis, and system-database secrets.
  • When you enable both the highAvailability and externalZyncDatabaseEnabled fields, you must pre-create the zync database secret.

    • Choose only one type of database to externalize in the case of system-database.

Configuration of the APIManager custom resource will depend on whether or not your choice of database is external to your 3scale deployment.

If your backend Redis, system Redis, and system database will be external to 3scale, the APIManager custom resource must have highAvailability set to true. See the following example:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  highAvailability:
    enabled: true

If your zync database will be external, the APIManager custom resource must have highAvailability set to true and externalZyncDatabaseEnabled must also be set to true. See the following example:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  highAvailability:
    enabled: true
    externalZyncDatabaseEnabled: true

1.8.4. Amazon Simple Storage Service 3scale FileStorage installation

The following examples show 3scale FileStorage using Amazon Simple Storage Service (Amazon S3) instead of persistent volume claim (PVC).

Before creating the APIManager custom resource to deploy 3scale, you must provide the connection settings for the S3 service using an OpenShift secret.

1.8.4.1. Amazon S3 secret

In the following example, the Secret name can be any name, as it will be referenced in the APIManager custom resource.

apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: aws-auth
stringData:
  AWS_ACCESS_KEY_ID: 123456
  AWS_SECRET_ACCESS_KEY: 98765544
  AWS_BUCKET: mybucket.example.com
  AWS_REGION: eu-west-1
type: Opaque

Lastly, create the APIManager custom resource to deploy 3scale.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  system:
    fileStorage:
      simpleStorageService:
        configurationSecretRef:
          name: aws-auth

Note

The Amazon S3 bucket, region, and credentials are provided through the referenced secret. The secret name is set in the APIManager custom resource through the configurationSecretRef field.

Check APIManager SystemS3Spec for reference.

1.8.5. PostgreSQL installation

A MySQL internal relational database is the default deployment. This deployment configuration can be overridden to use PostgreSQL instead.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  system:
    database:
      postgresql: {}

Check APIManager DatabaseSpec for reference.

1.8.6. Reconciliation

Once 3scale has been installed, the 3scale operator enables updating a given set of parameters from the custom resource to modify system configuration options. Modifications are made by hot swapping, that is, without stopping or shutting down the system.
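For example, a reconcilable field can be changed in place with oc patch against the running APIManager resource. The following is a sketch only: the resource name example-apimanager matches the examples in this section, and the oc call is shown commented out because it requires a live cluster:

```shell
# Build a JSON merge patch that bumps backend worker replicas;
# applying it triggers the operator's reconciliation without downtime.
PATCH='{"spec":{"backend":{"workerSpec":{"replicas":3}}}}'
# oc patch apimanager example-apimanager --type=merge -p "$PATCH"
echo "$PATCH"
```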

Not all the parameters of the APIManager custom resource definitions (CRDs) are reconcilable.

The following is a list of reconcilable parameters:

1.8.6.1. Resources

Resource limits and requests for all 3scale components.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  resourceRequirementsEnabled: true/false

1.8.6.2. Backend replicas

Backend components pod count.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  backend:
    listenerSpec:
      replicas: X
    workerSpec:
      replicas: Y
    cronSpec:
      replicas: Z

1.8.6.3. APIcast replicas

APIcast staging and production components pod count.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  apicast:
    productionSpec:
      replicas: X
    stagingSpec:
      replicas: Z

1.8.6.4. System replicas

System app and system sidekiq components pod count

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  system:
    appSpec:
      replicas: X
    sidekiqSpec:
      replicas: Z

1.8.6.5. Zync replicas

Zync app and que components pod count

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  zync:
    appSpec:
      replicas: X
    queSpec:
      replicas: Z

1.9. Troubleshooting common 3scale installation issues

This section contains a list of common installation issues and provides guidance for their resolution.

1.9.1. Previous deployment leaving dirty persistent volume claims

Problem

A previous deployment attempt leaves a dirty Persistent Volume Claim (PVC) causing the MySQL container to fail to start.

Cause

Deleting a project in OpenShift does not clean the PVCs associated with it.

Solution

Procedure

  1. Find the PVC containing the erroneous MySQL data with the oc get pvc command:

    # oc get pvc
    NAME                    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
    backend-redis-storage   Bound     vol003    100Gi      RWO,RWX       4d
    mysql-storage           Bound     vol006    100Gi      RWO,RWX       4d
    system-redis-storage    Bound     vol008    100Gi      RWO,RWX       4d
    system-storage          Bound     vol004    100Gi      RWO,RWX       4d
  2. Stop the deployment of the system-mysql pod by clicking cancel deployment in the OpenShift UI.
  3. Delete everything under the MySQL path to clean the volume.
  4. Start a new system-mysql deployment.

1.9.2. Wrong or missing credentials of the authenticated image registry

Problem

Pods are not starting. ImageStreams show the following error:

! error: Import failed (InternalError): ...unauthorized: Please login to the Red Hat Registry

Cause

While installing 3scale on OpenShift 4.x, OpenShift fails to start pods because ImageStreams cannot pull the images they reference. This happens because the pods cannot authenticate against the registries they point to.

Solution

Procedure

  1. Type the following command to verify the configuration of your container registry authentication:

    $ oc get secret
    • If your secret exists, you will see the following output in the terminal:

      threescale-registry-auth          kubernetes.io/dockerconfigjson        1         4m9s
    • If you do not see this output, continue with the following steps:
  2. Use the credentials you previously set up while Creating a registry service account to create your secret.
  3. Use the steps in Configuring registry authentication in OpenShift, replacing <your-registry-service-account-username> and <your-registry-service-account-password> in the oc create secret command provided.
  4. Generate the threescale-registry-auth secret in the same namespace as the APIManager resource. You must run the following inside the <project-name>:

    oc project <project-name>
    oc create secret docker-registry threescale-registry-auth \
      --docker-server=registry.redhat.io \
      --docker-username="<your-registry-service-account-username>" \
      --docker-password="<your-registry-service-account-password>" \
      --docker-email="<email-address>"
  5. Delete and recreate the APIManager resource:

    $ oc delete -f apimanager.yaml
    apimanager.apps.3scale.net "example-apimanager" deleted
    
    $ oc create -f apimanager.yaml
    apimanager.apps.3scale.net/example-apimanager created

Verification

  1. Type the following command to confirm that deployments have a status of Starting or Ready. The pods then begin to spawn:

    $ oc describe apimanager
    (...)
    Status:
      Deployments:
        Ready:
          apicast-staging
          system-memcache
          system-mysql
          system-redis
          zync
          zync-database
          zync-que
        Starting:
          apicast-production
          backend-cron
          backend-worker
          system-sidekiq
          system-sphinx
        Stopped:
          backend-listener
          backend-redis
          system-app
  2. Type the following command to see the status of each pod:

    $ oc get pods
    NAME                               READY   STATUS             RESTARTS   AGE
    3scale-operator-66cc6d857b-sxhgm   1/1     Running            0          17h
    apicast-production-1-deploy        1/1     Running            0          17m
    apicast-production-1-pxkqm         0/1     Pending            0          17m
    apicast-staging-1-dbwcw            1/1     Running            0          17m
    apicast-staging-1-deploy           0/1     Completed          0          17m
    backend-cron-1-deploy              1/1     Running            0          17m

1.9.3. Incorrectly pulling from the Docker registry

Problem

The following error occurs during installation:

svc/system-redis - 1EX.AMP.LE.IP:6379
  dc/system-redis deploys docker.io/rhscl/redis-32-rhel7:3.2-5.3
    deployment #1 failed 13 minutes ago: config change

Cause

OpenShift searches for and pulls container images by issuing the docker command. This command refers to the docker.io Docker registry instead of the registry.redhat.io Red Hat container registry.

This occurs when the system contains an unexpected version of the Docker containerized environment.

Solution

Procedure

Use the appropriate version of the Docker containerized environment.

1.9.4. Permission issues for MySQL when persistent volumes are mounted locally

Problem

The system-mysql pod crashes and does not deploy, causing other systems dependent on it to fail deployment. The pod log displays the following error:

[ERROR] Cannot start server : on unix socket: Permission denied
[ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
[ERROR] Aborting

Cause

The MySQL process is started with inappropriate user permissions.

Solution

Procedure

  1. The directories used for the persistent volumes MUST have write permissions for the root group. Having read-write permissions for the root user only is not enough, because the MySQL service runs as a different user in the root group. Execute the following command as the root user:

    chmod -R g+w /path/for/pvs
  2. Execute the following command to prevent SELinux from blocking access:

    chcon -Rt svirt_sandbox_file_t /path/for/pvs

1.9.5. Unable to upload logo or images

Problem

Unable to upload a logo - system-app logs display the following error:

Errno::EACCES (Permission denied @ dir_s_mkdir - /opt/system/public//system/provider-name/2

Cause

Persistent volumes are not writable by OpenShift.

Solution

Procedure

Ensure your persistent volume is writable by OpenShift. It should be owned by root group and be group writable.

1.9.6. Test calls not working on OpenShift

Problem

Test calls do not work after creation of a new service and routes on OpenShift. Direct calls via curl also fail, stating: service not available.

Cause

3scale requires HTTPS routes by default, and OpenShift routes are not secured.

Solution

Procedure

Ensure that the Secure route checkbox is selected in your OpenShift router settings.

1.9.7. APIcast on a different project from 3scale failing to deploy

Problem

APIcast deploy fails (pod does not turn blue). You see the following error in the logs:

update acceptor rejected apicast-3: pods for deployment "apicast-3" took longer than 600 seconds to become ready

You see the following error in the pod:

Error synching pod, skipping: failed to "StartContainer" for "apicast" with RunContainerError: "GenerateRunContainerOptions: secrets \"apicast-configuration-url-secret\" not found"

Cause

The secret was not properly set up.

Solution

Procedure

When creating a secret with APIcast v3, specify apicast-configuration-url-secret:

oc create secret generic apicast-configuration-url-secret --from-literal=password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>