Chapter 2. Installing 3scale on OpenShift

This section walks you through steps to deploy Red Hat 3scale API Management 2.11 on OpenShift.

The Red Hat 3scale API Management solution for on-premises deployment is composed of:

  • Two API gateways: embedded APIcast
  • One 3scale Admin Portal and Developer Portal with persistent storage

There are two ways to deploy a 3scale solution:

  • Using the 3scale operator
  • Using templates

Note
  • Whether deploying 3scale using the operator or via templates, you must first configure registry authentication to the Red Hat container registry. See Configuring container registry authentication.
  • The 3scale Istio Adapter is available as an optional adapter that allows labeling a service running within the Red Hat OpenShift Service Mesh, and integrating that service with Red Hat 3scale API Management. Refer to the 3scale adapter documentation for more information.

Prerequisites

To install 3scale on OpenShift, perform the steps outlined in the following sections:

2.1. System requirements for installing 3scale on OpenShift

This section lists the requirements for the 3scale - OpenShift template.

2.1.1. Environment requirements

Red Hat 3scale API Management requires an environment specified in supported configurations.

If you are using local filesystem storage:

Persistent volumes

  • 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence
  • 1 RWX (ReadWriteMany) persistent volume for Developer Portal content and System-app Assets

Configure the RWX persistent volume to be group writable. For a list of persistent volume types that support the required access modes, see the OpenShift documentation.
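For illustration only, a claim requesting the RWX access mode looks like the following sketch. The 3scale template and operator create the actual claims; the name and size shown here are placeholders:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-rwx-claim      # placeholder name
    spec:
      accessModes:
        - ReadWriteMany            # the RWX access mode
      resources:
        requests:
          storage: 1Gi             # placeholder size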

If you are using an Amazon Simple Storage Service (Amazon S3) bucket for content management system (CMS) storage:

Persistent volumes

  • 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence

Storage

  • 1 Amazon S3 bucket
  • Network File System (NFS)

2.1.2. Hardware requirements

Hardware requirements depend on your usage needs. Red Hat recommends that you test and configure your environment to meet your specific requirements. The following are the recommendations when configuring your environment for 3scale on OpenShift:

  • Compute optimized nodes for deployments on cloud environments (AWS c4.2xlarge or Azure Standard_F8).
  • Very large installations may require a separate node (AWS M4 series or Azure Av2 series) for Redis if memory requirements exceed your current node’s available RAM.
  • Separate nodes between routing and compute tasks.
  • Dedicated computing nodes for 3scale specific tasks.
  • Set the PUMA_WORKERS variable of the back-end listener to the number of cores in your compute node.
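For the last recommendation, in a template-based deployment the variable can be set on the backend-listener deployment configuration. This is a sketch only; the deployment configuration name assumes the default template objects, and the value 8 stands in for the core count of your compute node:

    # Example only: set PUMA_WORKERS on the backend listener (template-based deployment)
    oc set env dc/backend-listener PUMA_WORKERS=8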

2.2. Configuring nodes and entitlements

Before deploying 3scale on OpenShift, you must configure the necessary nodes and the entitlements for the environment to fetch images from the Red Hat Ecosystem Catalog. Perform the following steps to configure the nodes and entitlements:

Procedure

  1. Install Red Hat Enterprise Linux (RHEL) on each of your nodes.
  2. Register your nodes with Red Hat using the Red Hat Subscription Manager (RHSM), via the interface or the command line.
  3. Attach your nodes to your 3scale subscription using RHSM.
  4. Install OpenShift on your nodes, complying with the following requirements:

  5. Install the OpenShift command line interface.
  6. Enable access to the rhel-7-server-3scale-amp-2-rpms repository using the subscription manager:

    sudo subscription-manager repos --enable=rhel-7-server-3scale-amp-2-rpms
  7. Install the 3scale template called 3scale-amp-template. This will be saved at /opt/amp/templates.

    sudo yum install 3scale-amp-template

2.2.1. Configuring Amazon Simple Storage Service

Important

Skip this section if you are deploying 3scale with local filesystem storage.

If you want to use an Amazon Simple Storage Service (Amazon S3) bucket as storage, you must configure your bucket before you can deploy 3scale on OpenShift.

Perform the following steps to configure your Amazon S3 bucket for 3scale:

  1. Create an Identity and Access Management (IAM) policy with the following minimum permissions:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:ListAllMyBuckets",
                "Resource": "arn:aws:s3:::*"
            },
            {
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::targetBucketName",
                    "arn:aws:s3:::targetBucketName/*"
                ]
            }
        ]
    }
  2. Create a CORS configuration with the following rules:

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
    </CORSRule>
    </CORSConfiguration>
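If you manage the bucket with the AWS CLI instead of the console, both steps can be scripted. The following commands are a sketch: the policy name, JSON file name, and bucket name are placeholders, and a configured AWS CLI is assumed:

    # Step 1: create the IAM policy from a file containing the JSON shown above (placeholder file name)
    aws iam create-policy --policy-name threescale-s3 --policy-document file://threescale-s3-policy.json

    # Step 2: apply the same CORS rules, expressed as JSON, to the target bucket
    aws s3api put-bucket-cors --bucket targetBucketName --cors-configuration '{
      "CORSRules": [
        {
          "AllowedOrigins": ["https://*"],
          "AllowedMethods": ["GET"]
        }
      ]
    }'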

2.3. Deploying 3scale on OpenShift using a template

Note

OpenShift Container Platform (OCP) 4.x supports deployment of 3scale using the operator only. See Deploying 3scale using the operator.

Prerequisites

  • An OpenShift cluster configured as specified in the Configuring nodes and entitlements section.
  • A domain that resolves to your OpenShift cluster.
  • Access to the Red Hat Ecosystem Catalog.
  • (Optional) An Amazon Simple Storage Service (Amazon S3) bucket for content management system (CMS) storage outside of the local filesystem.
  • (Optional) A deployment with PostgreSQL.

    • This is the same as the default deployment on OpenShift, except that it uses PostgreSQL as the internal system database.
  • (Optional) A working SMTP server for email functionality.
Note

Deploying 3scale on OpenShift using a template is based on OpenShift Container Platform 3.11.

Follow these procedures to install 3scale on OpenShift using a .yml template:

2.4. Configuring container registry authentication

As a 3scale administrator, configure authentication with registry.redhat.io before you deploy 3scale container images on OpenShift.

Prerequisites

  • Cluster administrator access to an OpenShift Container Platform cluster.
  • OpenShift oc client tool is installed. For more details, see the OpenShift CLI documentation.

Procedure

  1. Log into your OpenShift cluster as administrator:

    $ oc login -u system:admin
  2. Open the project in which you want to deploy 3scale:

    oc project your-openshift-project
  3. Create a docker-registry secret using your Red Hat Customer Portal account, replacing threescale-registry-auth with the name of the secret to create:

    $ oc create secret docker-registry threescale-registry-auth \
      --docker-server=registry.redhat.io \
      --docker-username=CUSTOMER_PORTAL_USERNAME \
      --docker-password=CUSTOMER_PORTAL_PASSWORD \
      --docker-email=EMAIL_ADDRESS

    You will see the following output:

    secret/threescale-registry-auth created
  4. Link the secret to your service account to use the secret for pulling images. The service account name must match the name that the OpenShift pod uses. This example uses the default service account:

    $ oc secrets link default threescale-registry-auth --for=pull
  5. Link the secret to the builder service account to use the secret for pushing and pulling build images:

    $ oc secrets link builder threescale-registry-auth
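Optionally, you can verify that the secret is linked by inspecting the service accounts, for example:

    # threescale-registry-auth should appear under imagePullSecrets for the default service account
    $ oc get serviceaccount default -o yaml
    # and under secrets for the builder service account
    $ oc get serviceaccount builder -o yaml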

Additional resources

For more details on authenticating with Red Hat for container images:

2.4.1. Creating registry service accounts

To use container images from registry.redhat.io in a shared environment with 3scale 2.11 deployed on OpenShift, you must use a Registry Service Account instead of an individual user’s Customer Portal credentials.

Note

For 3scale 2.8 and later, you must follow the steps outlined below before deploying 3scale on OpenShift, whether you deploy using a template or the operator, because both options use registry authentication.

Procedure

  1. Navigate to the Registry Service Accounts page and log in.
  2. Click New Service Account. Fill in the form on the Create a New Registry Service Account page.

    1. Add a name for the service account.

      Note: You will see a fixed-length, randomly generated number string before the form field.

  3. Enter a Description.
  4. Click Create.
  5. Navigate back to your Service Accounts.
  6. Click the Service Account you created.
  7. Make a note of the username, including the prefix string, for example 12345678|username, and your password.

    1. This username and password is used to log in to registry.redhat.io.

      Note

      There are tabs available on the Token Information page that show you how to use the authentication token. For example, the Token Information tab shows the username in the format 12345678|username and the password string below it.

2.4.2. Modifying registry service accounts

Service accounts can be modified or deleted. This can be done from the Registry Service Accounts page using the pop-up menu to the right of each authentication token in the table.

Warning

The regeneration or removal of service accounts impacts systems that are using the token to authenticate and retrieve content from registry.redhat.io.

A description for each function is as follows:

  • Regenerate token: Allows an authorized user to reset the password associated with the Service Account.

    Note: The username for the Service Account cannot be changed.

  • Update Description: Allows an authorized user to update the description for the Service Account.
  • Delete Account: Allows an authorized user to remove the Service Account.

2.4.3. Importing the 3scale template

Note
  • Wildcard routes have been removed as of 3scale 2.6.

    • This functionality is handled by Zync in the background.
  • When API providers are created, updated, or deleted, routes automatically reflect those changes.

Perform the following steps to import the 3scale template into your OpenShift cluster:

Procedure

  1. From a terminal session log in to OpenShift as the cluster administrator:

    oc login
  2. Select your project, or create a new project:

    oc project <project_name>
    oc new-project <project_name>
  3. Enter the oc new-app command:

    1. Specify the --file option with the path to the amp.yml file you downloaded as part of Configuring nodes and entitlements.
    2. Specify the --param option with the WILDCARD_DOMAIN parameter set to the domain of your OpenShift cluster:

      oc new-app --file /opt/amp/templates/amp.yml --param WILDCARD_DOMAIN=<WILDCARD_DOMAIN>

      The terminal shows the master and tenant URLs and credentials for your newly created 3scale Admin Portal. This output should include the following information:

      • master admin username
      • master password
      • master token information
      • tenant username
      • tenant password
      • tenant token information
  4. Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.

    * With parameters:
    
     * ADMIN_PASSWORD=xXxXyz123 # generated
     * ADMIN_USERNAME=admin
     * TENANT_NAME=user
    
     * MASTER_NAME=master
     * MASTER_USER=master
     * MASTER_PASSWORD=xXxXyz123 # generated
    
    --> Success
    Access your application via route 'user-admin.3scale-project.example.com'
    Access your application via route 'master-admin.3scale-project.example.com'
    Access your application via route 'backend-user.3scale-project.example.com'
    Access your application via route 'user.3scale-project.example.com'
    Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
    Access your application via route 'api-user-apicast-production.3scale-project.example.com'
  5. Make a note of these details for future reference.
  6. The 3scale deployment on OpenShift has been successful when the following command returns:

    oc wait --for=condition=available --timeout=-1s $(oc get dc --output=name)
    Note

    When the 3scale deployment on OpenShift has been successful, your login credentials will work.
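If you did not capture the route URLs from the installer output, you can list them again at any time, for example:

    oc get routes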

2.4.4. Getting the Admin Portal URL

When you deploy 3scale using the template, a default tenant is created, with a fixed URL: 3scale-admin.${wildcardDomain}

The 3scale Dashboard shows the new portal URL of the tenant. As an example, if the <wildCardDomain> is 3scale-project.example.com, the Admin Portal URL is: https://3scale-admin.3scale-project.example.com.

The wildcardDomain is the <wildCardDomain> parameter you provided during installation. Open this unique URL in a browser using the following command:

xdg-open https://3scale-admin.3scale-project.example.com

Optionally, you can create new tenants on the MASTER portal URL: master.${wildcardDomain}

2.4.5. Deploying 3scale with Amazon Simple Storage Service

Deploying 3scale with Amazon Simple Storage Service (Amazon S3) is an optional procedure. Deploy 3scale with Amazon S3 using the following steps:

Procedure

  1. Download amp-s3.yml.
  2. Log in to OpenShift from a terminal session:

    oc login
  3. Select your project, or create a new project:

    oc project <project_name>

    OR

    oc new-project <project_name>
  4. Enter the oc new-app command:

    • Specify the --file option with the path to the amp-s3.yml file.
    • Specify the --param options with the following values:

      • WILDCARD_DOMAIN: the parameter set to the domain of your OpenShift cluster.
      • AWS_BUCKET: with your target bucket name.
      • AWS_ACCESS_KEY_ID: with your AWS credentials ID.
      • AWS_SECRET_ACCESS_KEY: with your AWS credentials KEY.
      • AWS_REGION: with the AWS region of your bucket.
      • AWS_HOSTNAME: AWS S3 compatible provider endpoint hostname. Default: Amazon endpoints.
      • AWS_PROTOCOL: AWS S3 compatible provider endpoint protocol. Default: HTTPS.
      • AWS_PATH_STYLE: when set to true, the bucket name is always left in the request URI and never moved to the host as a sub-domain. Default: false.
    • Optionally, specify the --param option with the TENANT_NAME parameter to set a custom name for the Admin Portal. If omitted, this defaults to 3scale.

      oc new-app --file /path/to/amp-s3.yml \
      	--param WILDCARD_DOMAIN=<a-domain-that-resolves-to-your-ocp-cluster.com> \
      	--param TENANT_NAME=3scale \
      	--param AWS_ACCESS_KEY_ID=<your-aws-access-key-id> \
      	--param AWS_SECRET_ACCESS_KEY=<your-aws-access-key-secret> \
      	--param AWS_BUCKET=<your-target-bucket-name> \
      	--param AWS_REGION=<your-aws-bucket-region> \
      	--param FILE_UPLOAD_STORAGE=s3

      The terminal shows the master and tenant URLs, as well as credentials for your newly created 3scale Admin Portal. This output should include the following information:

    • master admin username
    • master password
    • master token information
    • tenant username
    • tenant password
    • tenant token information
  5. Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.

    ...
    
    * With parameters:
     * ADMIN_PASSWORD=xXxXyz123 # generated
     * ADMIN_USERNAME=admin
     * TENANT_NAME=user
     ...
    
     * MASTER_NAME=master
     * MASTER_USER=master
     * MASTER_PASSWORD=xXxXyz123 # generated
     ...
    
    --> Success
    Access your application via route 'user-admin.3scale-project.example.com'
    Access your application via route 'master-admin.3scale-project.example.com'
    Access your application via route 'backend-user.3scale-project.example.com'
    Access your application via route 'user.3scale-project.example.com'
    Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
    Access your application via route 'api-user-apicast-production.3scale-project.example.com'
    Access your application via route 'apicast-wildcard.3scale-project.example.com'
    
    ...
  6. Make a note of these details for future reference.
  7. The 3scale deployment on OpenShift has been successful when the following command returns:

    oc wait --for=condition=available --timeout=-1s $(oc get dc --output=name)
    Note

    When the 3scale deployment on OpenShift has been successful, your login credentials will work.

2.4.6. Deploying 3scale with PostgreSQL

Deploying 3scale with PostgreSQL is an optional procedure. Deploy 3scale with PostgreSQL using the following steps:

Procedure

  1. Download amp-postgresql.yml.
  2. Log in to OpenShift from a terminal session:

    oc login
  3. Select your project, or create a new project:

    oc project <project_name>

    OR

    oc new-project <project_name>
  4. Enter the oc new-app command:

    • Specify the --file option with the path to the amp-postgresql.yml file.
    • Specify the --param options with the following values:
    • WILDCARD_DOMAIN: the parameter set to the domain of your OpenShift cluster.
    • Optionally, specify the --param option with the TENANT_NAME parameter to set a custom name for the Admin Portal. If omitted, this defaults to 3scale.

      oc new-app --file /path/to/amp-postgresql.yml \
      	--param WILDCARD_DOMAIN=<a-domain-that-resolves-to-your-ocp-cluster.com> \
      	--param TENANT_NAME=3scale

      The terminal shows the master and tenant URLs, as well as the credentials for your newly created 3scale Admin Portal. This output should include the following information:

    • master admin username
    • master password
    • master token information
    • tenant username
    • tenant password
    • tenant token information
  5. Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.

    ...
    
    * With parameters:
     * ADMIN_PASSWORD=xXxXyz123 # generated
     * ADMIN_USERNAME=admin
     * TENANT_NAME=user
     ...
    
     * MASTER_NAME=master
     * MASTER_USER=master
     * MASTER_PASSWORD=xXxXyz123 # generated
     ...
    
    --> Success
    Access your application via route 'user-admin.3scale-project.example.com'
    Access your application via route 'master-admin.3scale-project.example.com'
    Access your application via route 'backend-user.3scale-project.example.com'
    Access your application via route 'user.3scale-project.example.com'
    Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
    Access your application via route 'api-user-apicast-production.3scale-project.example.com'
    Access your application via route 'apicast-wildcard.3scale-project.example.com'
    
    ...
  6. Make a note of these details for future reference.
  7. The 3scale deployment on OpenShift has been successful when the following command returns:

    oc wait --for=condition=available --timeout=-1s $(oc get dc --output=name)
    Note

    When the 3scale deployment on OpenShift has been successful, your login credentials will work.

2.4.7. Configuring SMTP variables (optional)

3scale uses email to send notifications and to invite new users. If you intend to use these features, you must provide your own SMTP server and configure SMTP variables in the system-smtp secret.

Perform the following steps to configure the SMTP variables in the system-smtp secret:

Procedure

  1. If you are not already logged in, log in to OpenShift:

    oc login
  2. Using the oc patch command, specify the secret type where system-smtp is the name of the secret, followed by the -p option, and write the new values in JSON for the following variables (a combined example covering the remaining variables follows this procedure):

     Variable            | Description
     --------------------|--------------------------------------------------------------
     address             | Allows you to specify a remote mail server as a relay
     username            | Specify your mail server username
     password            | Specify your mail server password
     domain              | Specify a HELO domain
     port                | Specify the port on which the mail server is listening for new connections
     authentication      | Specify the authentication type of your mail server. Allowed values: plain (sends the password in the clear), login (sends the password Base64 encoded), or cram_md5 (exchanges information and uses a cryptographic Message Digest 5 algorithm to hash important information)
     openssl.verify.mode | Specify how OpenSSL checks certificates when using TLS. Allowed values: none or peer.

      Example

      oc patch secret system-smtp -p '{"stringData":{"address":"<your_address>"}}'
      oc patch secret system-smtp -p '{"stringData":{"username":"<your_username>"}}'
      oc patch secret system-smtp -p '{"stringData":{"password":"<your_password>"}}'
  3. After you have set the secret variables, redeploy the system-app and system-sidekiq pods:

    oc rollout latest dc/system-app
    oc rollout latest dc/system-sidekiq
  4. Check the status of the rollout to ensure it has finished:

    oc rollout status dc/system-app
    oc rollout status dc/system-sidekiq
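The remaining variables from the table can be set in the same way. The following sketch combines several in one patch; the values shown are placeholders:

    oc patch secret system-smtp -p '{"stringData":{"domain":"<your_helo_domain>","port":"587","authentication":"login","openssl.verify.mode":"peer"}}'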

2.5. Parameters of the 3scale template

Template parameters configure environment variables of the 3scale (amp.yml) template during and after deployment.

Table 2.1. Template parameters

Name | Description | Default Value | Required?
---- | ----------- | ------------- | ---------
APP_LABEL | Used for object app labels | 3scale-api-management | yes
ZYNC_DATABASE_PASSWORD | Password for the PostgreSQL connection user. Generated randomly if not provided. | N/A | yes
ZYNC_SECRET_KEY_BASE | Secret key base for Zync. Generated randomly if not provided. | N/A | yes
ZYNC_AUTHENTICATION_TOKEN | Authentication token for Zync. Generated randomly if not provided. | N/A | yes
AMP_RELEASE | 3scale release tag. | 2.11.0 | yes
ADMIN_PASSWORD | A randomly generated 3scale administrator account password. | N/A | yes
ADMIN_USERNAME | 3scale administrator account username. | admin | yes
APICAST_ACCESS_TOKEN | Read Only Access Token that APIcast will use to download its configuration. | N/A | yes
ADMIN_ACCESS_TOKEN | Admin Access Token with all scopes and write permissions for API access. | N/A | no
WILDCARD_DOMAIN | Root domain for the wildcard routes. For example, a root domain example.com will generate 3scale-admin.example.com. | N/A | yes
TENANT_NAME | Tenant name under the root that Admin Portal will be available with -admin suffix. | 3scale | yes
MYSQL_USER | Username for MySQL user that will be used for accessing the database. | mysql | yes
MYSQL_PASSWORD | Password for the MySQL user. | N/A | yes
MYSQL_DATABASE | Name of the MySQL database accessed. | system | yes
MYSQL_ROOT_PASSWORD | Password for Root user. | N/A | yes
SYSTEM_BACKEND_USERNAME | Internal 3scale API username for internal 3scale API auth. | 3scale_api_user | yes
SYSTEM_BACKEND_PASSWORD | Internal 3scale API password for internal 3scale API auth. | N/A | yes
REDIS_IMAGE | Redis image to use | registry.redhat.io/rhscl/redis-5-rhel7:5.0 | yes
MYSQL_IMAGE | MySQL image to use | registry.redhat.io/rhscl/mysql-57-rhel7:5.7 | yes
MEMCACHE_SERVERS | Comma-delimited string of memcache servers, creating a ring of memcache servers to be used by system-* pods. | system-memcache:11211 | yes
MEMCACHED_IMAGE | Memcached image to use | registry.redhat.io/3scale-amp2/memcached-rhel7:3scale2.11 | yes
POSTGRESQL_IMAGE | PostgreSQL image to use | registry.redhat.io/rhscl/postgresql-10-rhel7 | yes
AMP_SYSTEM_IMAGE | 3scale System image to use | registry.redhat.io/3scale-amp2/system-rhel7:3scale2.11 | yes
AMP_BACKEND_IMAGE | 3scale Backend image to use | registry.redhat.io/3scale-amp2/backend-rhel7:3scale2.11 | yes
AMP_APICAST_IMAGE | 3scale APIcast image to use | registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.11 | yes
AMP_ZYNC_IMAGE | 3scale Zync image to use | registry.redhat.io/3scale-amp2/zync-rhel7:3scale2.11 | yes
SYSTEM_BACKEND_SHARED_SECRET | Shared secret to import events from backend to system. | N/A | yes
SYSTEM_APP_SECRET_KEY_BASE | System application secret key base | N/A | yes
APICAST_MANAGEMENT_API | Scope of the APIcast Management API. Can be disabled, status or debug. At least status is required for health checks. | status | no
APICAST_OPENSSL_VERIFY | Turn on/off the OpenSSL peer verification when downloading the configuration. Can be set to true/false. | false | no
APICAST_RESPONSE_CODES | Enable logging response codes in APIcast. | true | no
APICAST_REGISTRY_URL | A URL which resolves to the location of APIcast policies | http://apicast-staging:8090/policies | yes
MASTER_USER | Master administrator account username | master | yes
MASTER_NAME | The subdomain value for the master Admin Portal; will be appended with the -master suffix | master | yes
MASTER_PASSWORD | A randomly generated master administrator password | N/A | yes
MASTER_ACCESS_TOKEN | A token with master level permissions for API calls | N/A | yes
IMAGESTREAM_TAG_IMPORT_INSECURE | Set to true if the server may bypass certificate verification or connect directly over HTTP during image import. | false | yes

An example value for MEMCACHE_SERVERS: MEMCACHE_SERVERS="cache-1.us-east.domain.com:11211,cache-3.us-east.domain.com:11211,cache-2.us-east.domain.com:11211"
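To review these parameters directly from the template file on your system, you can list them with oc, for example (the path assumes the template installed in Configuring nodes and entitlements):

    oc process --parameters -f /opt/amp/templates/amp.yml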

2.6. Deploying 3scale using the operator

This section takes you through installing and deploying the 3scale solution via the 3scale operator, using the APIManager custom resource.

Note
  • Wildcard routes have been removed since 3scale 2.6.

    • This functionality is handled by Zync in the background.
  • When API providers are created, updated, or deleted, routes automatically reflect those changes.

Prerequisites

Follow these procedures to deploy 3scale using the operator:

2.6.1. Deploying the APIManager custom resource

Deploying the APIManager custom resource will make the operator begin processing and will deploy a 3scale solution from it.

Procedure

  1. Click Operators > Installed Operators.

    1. From the list of Installed Operators, click 3scale Operator.
  2. Click the API Manager tab.
  3. Click Create APIManager.
  4. Clear the sample content and add the following YAML definitions to the editor, then click Create.

    • Before 3scale 2.8, you could configure the automatic addition of replicas by setting the highAvailability field to true. From 3scale 2.8, the addition of replicas is controlled through the replicas field in the APIManager CR as shown in the following example.

      Note

      The wildcardDomain parameter can be any name you wish, as long as it is a valid DNS domain that resolves to an IP address.

    • APIManager CR with minimum requirements:

      apiVersion: apps.3scale.net/v1alpha1
      kind: APIManager
      metadata:
        name: apimanager-sample
      spec:
        wildcardDomain: example.com
    • APIManager CR with replicas configured:

      apiVersion: apps.3scale.net/v1alpha1
      kind: APIManager
      metadata:
        name: apimanager-sample
      spec:
        system:
          appSpec:
            replicas: 1
          sidekiqSpec:
            replicas: 1
        zync:
          appSpec:
            replicas: 1
          queSpec:
            replicas: 1
        backend:
          cronSpec:
            replicas: 1
          listenerSpec:
            replicas: 1
          workerSpec:
            replicas: 1
        apicast:
          productionSpec:
            replicas: 1
          stagingSpec:
            replicas: 1
        wildcardDomain: example.com
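If you prefer to work from the command line instead of the console editor, the same YAML can be saved to a file and applied with oc. This is a sketch that assumes the file name apimanager-sample.yaml:

    oc apply -f apimanager-sample.yaml
    # Inspect the resource and its status while the operator reconciles the deployment
    oc get apimanager apimanager-sample -o yaml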

2.6.2. Getting the APIManager Admin Portal and Master Admin Portal credentials

To log in to either the 3scale Admin Portal or Master Admin Portal after the operator-based deployment, you need the credentials for each separate portal. To get these credentials:

  1. Run the following commands to get the Admin Portal credentials:

    oc get secret system-seed -o json | jq -r .data.ADMIN_USER | base64 -d
    oc get secret system-seed -o json | jq -r .data.ADMIN_PASSWORD | base64 -d
    1. Log in as the Admin Portal administrator to verify these credentials are working.
  2. Run the following commands to get the Master Admin Portal credentials:

    oc get secret system-seed -o json | jq -r .data.MASTER_USER | base64 -d
    oc get secret system-seed -o json | jq -r .data.MASTER_PASSWORD | base64 -d
    1. Log in as the Master Admin Portal administrator to verify these credentials are working.
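Both sets of commands read the system-seed secret that the operator creates in the 3scale project. If jq is not available, the same values can be read with oc alone, for example:

    oc get secret system-seed -o jsonpath='{.data.ADMIN_USER}' | base64 -d
    oc get secret system-seed -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d
    # use MASTER_USER and MASTER_PASSWORD for the Master Admin Portal credentials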

Additional resources

For more information about the APIManager fields, refer to the Reference documentation.

2.6.3. Getting the Admin Portal URL

When you deploy 3scale using the operator, a default tenant is created, with a fixed URL: 3scale-admin.${wildcardDomain}

The 3scale Dashboard shows the new portal URL of the tenant. As an example, if the <wildCardDomain> is 3scale-project.example.com, the Admin Portal URL is: https://3scale-admin.3scale-project.example.com.

The wildcardDomain is the <wildCardDomain> parameter you provided during installation. Open this unique URL in a browser using the following command:

xdg-open https://3scale-admin.3scale-project.example.com

Optionally, you can create new tenants on the MASTER portal URL: master.${wildcardDomain}

2.6.4. Configuring automated application of micro releases

To obtain micro release updates and have them be applied automatically, the 3scale operator’s approval strategy must be set to Automatic. The following describes the differences between Automatic and Manual settings and outlines the steps in a procedure to change from one to the other.

Automatic and manual:

  • During installation, Automatic is the selected option by default. Installation of new updates occurs as they become available. You can change this setting during installation or at any time afterwards.
  • If you select the Manual option during installation or at any time afterwards, updates are still made available, but you must approve the Install Plan and apply the updates yourself.

Procedure

  1. Click Operators > Installed Operators.
  2. Click 3scale API Management from the list of Installed Operators.
  3. Click the Subscription tab. Under the Subscription Details heading you will see the subheading Approval.
  4. Click the link below Approval. The link is set to Automatic by default. A modal with the heading Change Update Approval Strategy will pop up.
  5. Choose the option of your preference: Automatic (default) or Manual, and then click Save.

Additional resources

2.6.5. High availability in 3scale using the operator

High availability (HA) in 3scale using the operator aims to provide uninterrupted uptime if, for example, one or more databases were to fail.

If you want HA in your 3scale operator-based deployment, note the following:

  • Deploy and configure 3scale critical databases externally, specifically system database, system redis, and backend redis. Make sure you deploy and configure those databases in a way that they are highly available.
  • Specify the connection endpoints to those databases for 3scale by pre-creating their corresponding Kubernetes Secrets.

  • Set the .spec.highAvailability.enabled attribute to true when deploying the APIManager CR to enable external database mode for the critical databases: system database, system redis, and backend redis.

Additionally, if you want the zync database to be highly available to avoid zync potentially losing queue jobs data on restart, note the following:

  • Deploy and configure the zync database externally. Make sure you deploy and configure the database in a way that it is highly available.
  • Specify the connection endpoint to the zync database for 3scale by pre-creating its corresponding Kubernetes Secrets.

  • Deploy 3scale by setting the spec.highAvailability.externalZyncDatabaseEnabled attribute to true to specify the zync database as an external database.

2.7. Deployment configuration options for 3scale on OpenShift using the operator

This section provides information about the deployment configuration options for Red Hat 3scale API Management on OpenShift using the operator.

Prerequisites

2.7.1. Configuring proxy parameters for embedded APIcast

As a 3scale administrator, you can configure proxy parameters for embedded APIcast staging and production. This section provides reference information for specifying proxy parameters in an APIManager custom resource, that is, when you use the 3scale operator to deploy 3scale on OpenShift.

You can specify these parameters when you deploy an APIManager CR for the first time or you can update a deployed APIManager CR and the operator will reconcile the update. See Deploying the APIManager custom resource.

There are four proxy-related configuration parameters for embedded APIcast:

  • allProxy
  • httpProxy
  • httpsProxy
  • noProxy

allProxy

The allProxy parameter specifies an HTTP or HTTPS proxy to be used for connecting to services when a request does not specify a protocol-specific proxy.

After you set up a proxy, configure APIcast by setting the allProxy parameter to the address of the proxy. Authentication is not supported for the proxy. In other words, APIcast does not send authenticated requests to the proxy.

The value of the allProxy parameter is a string, there is no default, and the parameter is not required. Use this format to set the spec.apicast.productionSpec.allProxy parameter or the spec.apicast.stagingSpec.allProxy parameter:

<scheme>://<host>:<port>

For example:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
   name: example-apimanager
spec:
   apicast:
      productionSpec:
         allProxy: http://forward-proxy:80
      stagingSpec:
         allProxy: http://forward-proxy:81

httpProxy

The httpProxy parameter specifies an HTTP proxy to be used for connecting to HTTP services.

After you set up a proxy, configure APIcast by setting the httpProxy parameter to the address of the proxy. Authentication is not supported for the proxy. In other words, APIcast does not send authenticated requests to the proxy.

The value of the httpProxy parameter is a string, there is no default, and the parameter is not required. Use this format to set the spec.apicast.productionSpec.httpProxy parameter or the spec.apicast.stagingSpec.httpProxy parameter:

http://<host>:<port>

For example:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
   name: example-apimanager
spec:
   apicast:
      productionSpec:
         httpProxy: http://forward-proxy:80
      stagingSpec:
         httpProxy: http://forward-proxy:81

httpsProxy

The httpsProxy parameter specifies an HTTPS proxy to be used for connecting to services.

After you set up a proxy, configure APIcast by setting the httpsProxy parameter to the address of the proxy. Authentication is not supported for the proxy. In other words, APIcast does not send authenticated requests to the proxy.

The value of the httpsProxy parameter is a string, there is no default, and the parameter is not required. Use this format to set the spec.apicast.productionSpec.httpsProxy parameter or the spec.apicast.stagingSpec.httpsProxy parameter:

https://<host>:<port>

For example:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
   name: example-apimanager
spec:
   apicast:
      productionSpec:
         httpsProxy: https://forward-proxy:80
      stagingSpec:
         httpsProxy: https://forward-proxy:81

noProxy

The noProxy parameter specifies a comma-separated list of hostnames and domain names. When a request contains one of these names, APIcast does not proxy the request.

If you need to stop access to the proxy, for example during maintenance operations, set the noProxy parameter to an asterisk (*). This matches all hosts specified in all requests and effectively disables any proxies.

The value of the noProxy parameter is a string, there is no default, and the parameter is not required. Specify a comma-separated string to set the spec.apicast.productionSpec.noProxy parameter or the spec.apicast.stagingSpec.noProxy parameter. For example:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
   name: example-apimanager
spec:
   apicast:
      productionSpec:
         noProxy: theStore,company.com,big.red.com
      stagingSpec:
         noProxy: foo,bar.com,.extra.dot.com

2.7.2. Injecting custom environments with the 3scale operator

In a 3scale installation that uses embedded APIcast, you can use the 3scale operator to inject custom environments. Embedded APIcast is also referred to as managed or hosted APIcast. A custom environment defines behavior that APIcast applies to all upstream APIs that the gateway serves. To create a custom environment, define a global configuration in Lua code.

You can inject a custom environment before or after 3scale installation. After injecting a custom environment and after 3scale installation, you can remove a custom environment. The 3scale operator reconciles the changes.

Prerequisites

  • The 3scale operator is installed.

Procedure

  1. Write Lua code that defines the custom environment that you want to inject. For example, the following env1.lua file shows a custom logging policy that the 3scale operator loads for all services.

    local cjson = require('cjson')
    local PolicyChain = require('apicast.policy_chain')
    local policy_chain = context.policy_chain
    
    local logging_policy_config = cjson.decode([[
    {
      "enable_access_logs": false,
      "custom_logging": "\"{{request}}\" to service {{service.id}} and {{service.name}}"
    }
    ]])
    
    policy_chain:insert( PolicyChain.load_policy('logging', 'builtin', logging_policy_config), 1)
    
    return {
      policy_chain = policy_chain,
      port = { metrics = 9421 },
    }
  2. Create a secret from the Lua file that defines the custom environment. For example:

    oc create secret generic custom-env-1 --from-file=./env1.lua

     A secret can contain multiple custom environments. Specify the --from-file option for each file that defines a custom environment. The operator loads each custom environment.

  3. Define an APIManager custom resource that references the secret you just created. The following example shows only content relative to referencing the secret that defines the custom environment.

    apiVersion: apps.3scale.net/v1alpha1
    kind: APIManager
    metadata:
      name: apimanager-apicast-custom-environment
    spec:
      wildcardDomain: <desired-domain>
      apicast:
        productionSpec:
          customEnvironments:
            - secretRef:
                name: custom-env-1
        stagingSpec:
          customEnvironments:
            - secretRef:
                name: custom-env-1

    An APIManager custom resource can reference multiple secrets that define custom environments. The operator loads each custom environment.

  4. Create the APIManager custom resource that adds the custom environment. For example:

    oc apply -f apimanager.yaml

Next steps

You cannot update the content of a secret that defines a custom environment. If you need to update the custom environment you can do either of the following:

  • The recommended option is to create a secret with a different name and update the APIManager custom resource field, customEnvironments[].secretRef.name. The operator triggers a rolling update and loads the updated custom environment.
  • Alternatively, you can update the existing secret, redeploy APIcast by setting spec.apicast.productionSpec.replicas or spec.apicast.stagingSpec.replicas to 0, and then redeploy APIcast again by setting spec.apicast.productionSpec.replicas or spec.apicast.stagingSpec.replicas back to its previous value, as shown in the sketch below.
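For the second option, the scale-down and scale-up can be done by patching the APIManager custom resource. This is a sketch for the staging gateway; <apimanager_name> is a placeholder and the final replica value should be restored to your previous setting:

    oc patch apimanager <apimanager_name> --type=merge -p '{"spec":{"apicast":{"stagingSpec":{"replicas":0}}}}'
    oc patch apimanager <apimanager_name> --type=merge -p '{"spec":{"apicast":{"stagingSpec":{"replicas":1}}}}'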

2.7.3. Injecting custom policies with the 3scale operator

In a 3scale installation that uses embedded APIcast, you can use the 3scale operator to inject custom policies. Embedded APIcast is also referred to as managed or hosted APIcast. Injecting a custom policy adds the policy code to APIcast. You can then use either of the following to add the custom policy to an API product’s policy chain:

  • 3scale API
  • Product custom resource

To use the 3scale Admin Portal to add the custom policy to a product’s policy chain, you must also register the custom policy’s schema with a CustomPolicyDefinition custom resource. Custom policy registration is a requirement only when you want to use the Admin Portal to configure a product’s policy chain.

You can inject a custom policy as part of or after 3scale installation. After injecting a custom policy and after 3scale installation, you can remove a custom policy by removing its specification from the APIManager CR. The 3scale operator reconciles the changes.

Prerequisites

  • You are installing or you previously installed the 3scale operator.
  • You have defined a custom policy as described in Write your own policy. That is, you have already created, for example, the my-policy.lua, apicast-policy.json, and init.lua files that define a custom policy.

Procedure

  1. Create a secret from the files that define one custom policy. For example:

    oc create secret generic my-first-custom-policy-secret \
     --from-file=./apicast-policy.json \
     --from-file=./init.lua \
     --from-file=./my-first-custom-policy.lua

    If you have more than one custom policy, create a secret for each custom policy. A secret can contain only one custom policy.

  2. Define an APIManager custom resource that references each secret that contains a custom policy. You can specify the same secret for APIcast staging and APIcast production. The following example shows only content relative to referencing secrets that contain custom policies.

    apiVersion: apps.3scale.net/v1alpha1
    kind: APIManager
    metadata:
      name: apimanager-apicast-custom-policy
    spec:
      apicast:
        stagingSpec:
          customPolicies:
            - name: my-first-custom-policy
              version: "0.1"
              secretRef:
                name: my-first-custom-policy-secret
            - name: my-second-custom-policy
              version: "0.1"
              secretRef:
                name: my-second-custom-policy-secret
        productionSpec:
          customPolicies:
            - name: my-first-custom-policy
              version: "0.1"
              secretRef:
                name: my-first-custom-policy-secret
            - name: my-second-custom-policy
              version: "0.1"
              secretRef:
                name: my-second-custom-policy-secret

    An APIManager custom resource can reference multiple secrets that define different custom policies. The operator loads each custom policy.

  3. Create the APIManager custom resource that references the secrets that contain the custom policies. For example:

    oc apply -f apimanager.yaml

Next steps

You cannot update the content of a secret that defines a custom policy. If you need to update the custom policy you can do either of the following:

  • The recommended option is to create a secret with a different name and update the APIManager custom resource customPolicies section to refer to the new secret. The operator triggers a rolling update and loads the updated custom policy.
  • Alternatively, you can update the existing secret, redeploy APIcast by setting spec.apicast.productionSpec.replicas or spec.apicast.stagingSpec.replicas to 0, and then redeploy APIcast again by setting spec.apicast.productionSpec.replicas or spec.apicast.stagingSpec.replicas back to its previous value.

2.7.4. Configuring OpenTracing with the 3scale operator

In a 3scale installation that uses embedded APIcast, you can use the 3scale operator to configure OpenTracing. You can configure OpenTracing in the staging or production environments or both environments. By enabling OpenTracing, you get more insight and better observability on the APIcast instance.

Prerequisites

Procedure

  1. Define a secret that contains your OpenTracing configuration details in stringData.config. This is the only valid attribute for providing your OpenTracing configuration details. Any other specification prevents APIcast from receiving your OpenTracing configuration details. The following example shows a valid secret definition:

    apiVersion: v1
    kind: Secret
    metadata:
      name: myjaeger
    stringData:
      config: |-
          {
          "service_name": "apicast",
          "disabled": false,
          "sampler": {
            "type": "const",
            "param": 1
          },
          "reporter": {
            "queueSize": 100,
            "bufferFlushInterval": 10,
            "logSpans": false,
            "localAgentHostPort": "jaeger-all-in-one-inmemory-agent:6831"
          },
          "headers": {
            "jaegerDebugHeader": "debug-id",
            "jaegerBaggageHeader": "baggage",
            "TraceContextHeaderName": "uber-trace-id",
            "traceBaggageHeaderPrefix": "testctx-"
          },
          "baggage_restrictions": {
              "denyBaggageOnInitializationFailure": false,
              "hostPort": "127.0.0.1:5778",
              "refreshInterval": 60
          }
          }
    type: Opaque
  2. Create the secret. For example, if you saved the previous secret definition in the myjaeger.yaml file, you would run the following command:

    oc create -f myjaeger.yaml
  3. Define an APIManager custom resource that specifies OpenTracing attributes. In the CR definition, set the openTracing.tracingConfigSecretRef.name attribute to the name of the secret that contains your OpenTracing configuration details. The following example shows only content relative to configuring OpenTracing.

    apiVersion: apps.3scale.net/v1alpha1
    kind: APIManager
    metadata:
      name: apimanager1
    spec:
      apicast:
        stagingSpec:
          ...
          openTracing:
            enabled: true
            tracingLibrary: jaeger
            tracingConfigSecretRef:
              name: myjaeger
        productionSpec:
          ...
          openTracing:
            enabled: true
            tracingLibrary: jaeger
            tracingConfigSecretRef:
              name: myjaeger
  4. Create the APIManager custom resource that configures OpenTracing. For example, if you saved the APIManager custom resource in the apimanager1.yaml file, you would run the following command:

    oc apply -f apimanager1.yaml

Next steps

Depending on how OpenTracing is installed, you should see the traces in the Jaeger service user interface.

2.7.5. Enabling TLS at the pod level with the 3scale operator

3scale deploys two APIcast instances, one for production and the other for staging. TLS can be enabled for only production or only staging, or for both instances.

Prerequisites

  • A valid certificate for enabling TLS.

Procedure

  1. Create a secret from your valid certificate, for example:

    oc create secret tls mycertsecret --cert=server.crt --key=server.key

    The configuration exposes secret references in the APIManager CRD. You create the secret and then reference the name of the secret in the APIManager custom resource as follows:

    • Production: The APIManager CR exposes the certificate in the .spec.apicast.productionSpec.httpsCertificateSecretRef field.
    • Staging: The APIManager CR exposes the certificate in the .spec.apicast.stagingSpec.httpsCertificateSecretRef field.

      Optionally, you can configure the following:

    • httpsPort indicates which port APIcast should start listening on for HTTPS connections. If this clashes with the HTTP port, APIcast uses this port for HTTPS only.
    • httpsVerifyDepth defines the maximum length of the client certificate chain.

      Note

      Provide a valid certificate and reference it from the APIManager CR. If the configuration can access httpsPort but not httpsCertificateSecretRef, APIcast uses an embedded self-signed certificate. This is not recommended.

  2. Click Operators > Installed Operators.
  3. From the list of Installed Operators, click 3scale Operator.
  4. Click the API Manager tab.
  5. Click Create APIManager.
  6. Add the following YAML definitions to the editor.

    1. If enabling for production, configure the following YAML definitions:

      spec:
        apicast:
          productionSpec:
            httpsPort: 8443
            httpsVerifyDepth: 1
            httpsCertificateSecretRef:
              name: mycertsecret
    2. If enabling for staging, configure the following YAML definitions:

      spec:
        apicast:
          stagingSpec:
            httpsPort: 8443
            httpsVerifyDepth: 1
            httpsCertificateSecretRef:
              name: mycertsecret
  7. Click Create.

2.7.6. Proof of concept for evaluation deployment

The following sections describe the configuration options applicable to the proof of concept for an evaluation deployment of 3scale. This deployment uses internal databases as default.

Important

The configuration for external databases is the standard deployment option for production environments.

2.7.6.1. Default deployment configuration

  • Containers will have Kubernetes resource limits and requests.

    • This ensures a minimum performance level.
    • It limits resources to allow external services and allocation of solutions.
  • Deployment of internal databases.
  • File storage will be based on Persistence Volumes (PV).

    • One will require the ReadWriteMany (RWX) access mode.
    • OpenShift configured to provide them upon request.
  • Deploy MySQL as the internal relational database.

The default configuration option is suitable for proof of concept (PoC) or evaluation by a customer.

One, many, or all of the default configuration options can be overridden with specific field values in the APIManager custom resource. The 3scale operator allows all available combinations, whereas templates allow fixed deployment profiles. For example, the 3scale operator allows deployment of 3scale in evaluation mode and external databases mode. Templates do not allow this specific deployment configuration. Templates are only available for the most common configuration options.

2.7.6.2. Evaluation installation

For an evaluation installation, containers will not have Kubernetes resource limits and requests specified. This results in, for example:

  • Small memory footprint
  • Fast startup
  • Runnable on laptop
  • Suitable for presale/sales demos
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  resourceRequirementsEnabled: false

Additional resources

  • See APIManager custom resource for more information.

2.7.7. External databases installation

An external databases installation is suitable for production use where high availability (HA) is a requirement or where you plan to reuse your own databases.

Important

When enabling the 3scale external databases installation mode, all of the following databases are externalized:

  • backend-redis
  • system-redis
  • system-database (mysql, postgresql, or oracle)

3scale 2.8 and above has been tested and is supported with the following database versions:

Database   | Version
-----------|--------
Redis      | 5.0
MySQL      | 5.7
PostgreSQL | 10.6

Before creating the APIManager custom resource to deploy 3scale, you must provide the following connection settings for the external databases using OpenShift secrets.

2.7.7.1. Backend Redis secret

Deploy two external Redis instances and fill in the connection settings as shown in the following example:

apiVersion: v1
kind: Secret
metadata:
  name: backend-redis
stringData:
  REDIS_STORAGE_URL: "redis://backend-redis-storage"
  REDIS_STORAGE_SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379, redis://sentinel-2.example.com:26379"
  REDIS_STORAGE_SENTINEL_ROLE: "master"
  REDIS_QUEUES_URL: "redis://backend-redis-queues"
  REDIS_QUEUES_SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379, redis://sentinel-2.example.com:26379"
  REDIS_QUEUES_SENTINEL_ROLE: "master"
type: Opaque

The Secret name must be backend-redis.

2.7.7.2. System Redis secret

Deploy two external Redis instances and fill in the connection settings as shown in the following example:

apiVersion: v1
kind: Secret
metadata:
  name: system-redis
stringData:
  URL: "redis://system-redis"
  SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379, redis://sentinel-2.example.com:26379"
  SENTINEL_ROLE: "master"
  NAMESPACE: ""
  MESSAGE_BUS_URL: "redis://system-redis-messagebus"
  MESSAGE_BUS_SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379, redis://sentinel-2.example.com:26379"
  MESSAGE_BUS_SENTINEL_ROLE: "master"
  MESSAGE_BUS_NAMESPACE: ""
type: Opaque

The Secret name must be system-redis.

2.7.7.3. System database secret

Note

The Secret name must be system-database.

When you are deploying 3scale, you have three alternatives for your system database. Configure different attributes and values for each alternative’s related secret.

  • MySQL
  • PostgreSQL
  • Oracle Database

To deploy a MySQL, PostgreSQL, or an Oracle Database system database secret, fill in the connection settings as shown in the following examples:

MySQL system database secret

apiVersion: v1
kind: Secret
metadata:
  name: system-database
stringData:
  URL: "mysql2://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
type: Opaque

PostgreSQL system database secret

apiVersion: v1
kind: Secret
metadata:
  name: system-database
stringData:
  URL: "postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
type: Opaque

Oracle system database secret

apiVersion: v1
kind: Secret
metadata:
  name: system-database
stringData:
  URL: "oracle-enhanced://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
  ORACLE_SYSTEM_PASSWORD: "{SYSTEM_PASSWORD}"
type: Opaque

Note
  • The Oracle system user executes commands with system privileges. Some are detailed in this GitHub repository. The latest can be executed in the Oracle Database initializer when the tables are initialized in the database. There may be other commands executed not listed in these links.
  • The system user is also required for upgrades when there are any schema migrations to run, so other commands not included in the previous links may be executed.
  • Disclaimer: Links contained in this note to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.

2.7.7.4. Zync database secret

In a zync database setup, when HighAvailability is enabled and the externalZyncDatabaseEnabled field is also enabled, you must pre-create a secret named zync. In that secret, set the DATABASE_URL and ZYNC_DATABASE_PASSWORD fields to values that point to your external database. The external database must be in high-availability mode. See the following example:

apiVersion: v1
kind: Secret
metadata:
  name: zync
stringData:
  DATABASE_URL: postgresql://<zync-db-user>:<zync-db-password>@<zync-db-host>:<zync-db-port>/zync_production
  ZYNC_DATABASE_PASSWORD: <zync-db-password>
type: Opaque

2.7.7.5. APIManager custom resources to deploy 3scale

Note
  • When you enable highAvailability, you must pre-create the backend-redis, system-redis, and system-database secrets.
  • When you enable highAvailability and the externalZyncDatabaseEnabled fields together, you must pre-create the zync database secret.

    • Choose only one type of database to externalize in the case of system-database.

Configuration of the APIManager custom resource will depend on whether or not your choice of database is external to your 3scale deployment.

If your backend Redis, system Redis, and system database will be external to 3scale, the APIManager custom resource must have highAvailability set to true. See the following example:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  highAvailability:
    enabled: true

If your zync database will be external, the APIManager custom resource must have highAvailability set to true and externalZyncDatabaseEnabled must also be set to true. See the following example:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  highAvailability:
    enabled: true
    externalZyncDatabaseEnabled: true

2.7.8. Amazon Simple Storage Service 3scale FileStorage installation

The following examples show 3scale FileStorage using Amazon Simple Storage Service (Amazon S3) instead of a persistent volume claim (PVC).

Before creating the APIManager custom resource to deploy 3scale, connection settings for the S3 service need to be provided using an OpenShift secret.

2.7.8.1. Amazon S3 secret

Note

An AWS S3 compatible provider can be configured in the S3 secret with AWS_HOSTNAME, AWS_PATH_STYLE, and AWS_PROTOCOL optional keys. See the S3 secret reference for more details.

In the following example, the secret name can be anything, because it will be referenced in the APIManager custom resource.

apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: aws-auth
stringData:
  AWS_ACCESS_KEY_ID: 123456
  AWS_SECRET_ACCESS_KEY: 98765544
  AWS_BUCKET: mybucket.example.com
  AWS_REGION: eu-west-1
type: Opaque
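
As an alternative to applying the YAML above, the same secret can be created from the command line. This is a sketch with placeholder values; replace them with your own credentials, bucket, and region:

oc create secret generic aws-auth \
  --from-literal=AWS_ACCESS_KEY_ID=<access-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-access-key> \
  --from-literal=AWS_BUCKET=<bucket-name> \
  --from-literal=AWS_REGION=<region>
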
Note

The Amazon S3 region and Amazon S3 bucket settings are provided in the referenced secret. Only the Amazon S3 secret name is provided directly in the APIManager custom resource.

Lastly, create the APIManager custom resource to deploy 3scale.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  system:
    fileStorage:
      simpleStorageService:
        configurationSecretRef:
          name: aws-auth

Check APIManager SystemS3Spec for reference.

2.7.9. PostgreSQL installation

A MySQL internal relational database is the default deployment. This deployment configuration can be overridden to use PostgreSQL instead.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  system:
    database:
      postgresql: {}
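
After the operator reconciles this APIManager, the system database runs as PostgreSQL rather than MySQL. A quick, non-authoritative check, assuming the default system-postgresql component name used elsewhere in this chapter:

oc get pods | grep system-postgresql    # lists the system-postgresql pods if the override took effect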

Additional resources

2.7.10. Customizing compute resource requirements at component level

Customize Kubernetes Compute Resource Requirements in your 3scale solution through the APIManager custom resource attributes. Do this to customize the compute resource requirements, that is, the CPU and memory, assigned to a specific APIManager component.

The following example outlines how to customize compute resource requirements for the system-app’s system-provider container, for the backend-listener, and for the zync-database:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  backend:
    listenerSpec:
      resources:
        requests:
          memory: "150Mi"
          cpu: "300m"
        limits:
          memory: "500Mi"
          cpu: "1000m"
  system:
    appSpec:
      providerContainerResources:
        requests:
          memory: "111Mi"
          cpu: "222m"
        limits:
          memory: "333Mi"
          cpu: "444m"
  zync:
    databaseResources:
      requests:
        memory: "111Mi"
        cpu: "222m"
      limits:
        memory: "333Mi"
        cpu: "444m"

Additional resources

See APIManager CRD reference for more information about how to specify component-level custom resource requirements.

2.7.10.1. Default APIManager components compute resources

When you configure the APIManager spec.resourceRequirementsEnabled attribute as true, the default compute resources are set for the APIManager components.

The specific compute resources default values that are set for the APIManager components are shown in the following table.
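
For example, a minimal APIManager that explicitly enables this attribute might look like the following sketch, which reuses the wildcardDomain value from the earlier examples:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  # field name taken from spec.resourceRequirementsEnabled described above
  resourceRequirementsEnabled: true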

2.7.10.1.1. CPU and memory units

The following list explains the units you will find mentioned in the compute resources default values table. For more information on CPU and memory units, see Managing Resources for Containers.

Resource units explanation

  • m - millicore (milliCPU); 1000m is equal to 1 CPU core
  • Mi - mebibytes (2^20 bytes)
  • Gi - gibibytes (2^30 bytes)
  • G - gigabytes (10^9 bytes)

Table 2.2. Compute resources default values

Component                        CPU requests   CPU limits   Memory requests   Memory limits
system-app’s system-master       50m            1000m        600Mi             800Mi
system-app’s system-provider     50m            1000m        600Mi             800Mi
system-app’s system-developer    50m            1000m        600Mi             800Mi
system-sidekiq                   100m           1000m        500Mi             2Gi
system-sphinx                    80m            1000m        250Mi             512Mi
system-redis                     150m           500m         256Mi             32Gi
system-mysql                     250m           No limit     512Mi             2Gi
system-postgresql                250m           No limit     512Mi             2Gi
backend-listener                 500m           1000m        550Mi             700Mi
backend-worker                   150m           1000m        50Mi              300Mi
backend-cron                     50m            150m         40Mi              80Mi
backend-redis                    1000m          2000m        1024Mi            32Gi
apicast-production               500m           1000m        64Mi              128Mi
apicast-staging                  50m            100m         64Mi              128Mi
zync                             150m           1            250M              512Mi
zync-que                         250m           1            250M              512Mi
zync-database                    50m            250m         250M              2G

2.7.11. Customizing node affinity and tolerations at component level

Customize Kubernetes Affinity and Tolerations in your Red Hat 3scale API Management solution through the APIManager custom resource attributes to control where and how the different 3scale components of an installation are scheduled onto Kubernetes nodes.

The following example sets a custom node affinity for the backend listener, and custom tolerations for the system-memcached:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  backend:
    listenerSpec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: "kubernetes.io/hostname"
                operator: In
                values:
                - ip-10-96-1-105
              - key: "beta.kubernetes.io/arch"
                operator: In
                values:
                - amd64
  system:
    memcachedTolerations:
    - key: key1
      value: value1
      operator: Equal
      effect: NoSchedule
    - key: key2
      value: value2
      operator: Equal
      effect: NoSchedule
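
Tolerations only have an effect on nodes that carry matching taints. One way to add a taint that matches the first toleration above is shown below as a sketch; the node name is a placeholder:

oc adm taint nodes <node-name> key1=value1:NoSchedule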

Additional resources

See APIManager CRD reference for a full list of attributes related to affinity and tolerations.

2.7.12. Reconciliation

Once 3scale has been installed, the 3scale operator enables updating a given set of parameters from the custom resource to modify system configuration options. Modifications are made by hot swapping, that is, without stopping or shutting down the system.

Not all the parameters of the APIManager custom resource definitions (CRDs) are reconcilable.

The following is a list of reconcilable parameters:

2.7.12.1. Resources

Resource limits and requests for all 3scale components.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  resourceRequirementsEnabled: true/false

2.7.12.2. Backend replicas

Backend components pod count.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  backend:
    listenerSpec:
      replicas: X
    workerSpec:
      replicas: Y
    cronSpec:
      replicas: Z

2.7.12.3. APIcast replicas

APIcast staging and production components pod count.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  apicast:
    productionSpec:
      replicas: X
    stagingSpec:
      replicas: Z

2.7.12.4. System replicas

System app and system sidekiq components pod count.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  system:
    appSpec:
      replicas: X
    sidekiqSpec:
      replicas: Z

2.7.12.5. Zync replicas

Zync app and que components pod count.

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  zync:
    appSpec:
      replicas: X
    queSpec:
      replicas: Z
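
Because these parameters are reconciled, they can be changed on a running installation without redeploying. The following is a sketch that assumes the APIManager is named example-apimanager and sets the backend worker replica count to 3:

oc patch apimanager example-apimanager --type=merge \
  -p '{"spec":{"backend":{"workerSpec":{"replicas":3}}}}'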

2.8. Installing 3scale with the operator using Oracle as the system database

As a Red Hat 3scale API Management administrator, you can install 3scale with the operator using Oracle Database. By default, 3scale 2.11 has a component called system that stores configuration data in a MySQL database. You can override the default database and store your information in an external Oracle Database. Follow the steps below to build a custom system container image with your own Oracle Database client binaries and deploy 3scale to OpenShift.


Prerequisites

To install 3scale with the operator using Oracle as the system database, use the following steps:

Procedure

  1. Download 3scale OpenShift templates from the GitHub repository and extract the archive:

    tar -xzf 3scale-amp-openshift-templates-3scale-2.11.1-GA.tar.gz
  2. Follow the prerequisites in Setting up your 3scale system image with an Oracle Database.

    Note

    If the client package versions downloaded and stored locally do not match the ones 3scale expects, 3scale will automatically download and use the appropriate ones in the following steps.

  3. Place your Oracle Database Instant Client Package files into the 3scale-amp-openshift-templates-3scale-2.11.1-GA/amp/system-oracle/oracle-client-files directory.
  4. Log in to your registry.redhat.io account using the credentials you created in Creating a Registry Service Account.

    docker login registry.redhat.io
  5. Build the custom system Oracle-based image. The image tag must be a fixed image tag as in the following example:

    docker build . --tag myregistry.example.com/system-oracle:2.11.0-1
  6. Push the system Oracle-based image to a container registry that is accessible by the OpenShift Container Platform (OCP) cluster where your 3scale solution is going to be installed:

    docker push myregistry.example.com/system-oracle:2.11.0-1
  7. Set up the Oracle Database URL connection string and the Oracle Database system password by creating the system-database secret with the corresponding fields. See External databases installation for the Oracle Database.
  8. Install your 3scale solution by creating an APIManager custom resource. Follow the instructions in Deploying 3scale using the operator.

    • The APIManager custom resource must specify the .spec.system.image field set to the system’s Oracle-based image you previously built:

      apiVersion: apps.3scale.net/v1alpha1
      kind: APIManager
      metadata:
        name: example-apimanager
      spec:
        imagePullSecrets:
        - name: threescale-registry-auth
        - name: custom-registry-auth
        system:
          image: "myregistry.example.com/system-oracle:2.11.0-1"
        highAvailability:
          enabled: true

2.9. Troubleshooting common 3scale installation issues

This section contains a list of common installation issues and provides guidance for their resolution.

2.9.1. Previous deployment leaving dirty persistent volume claims

Problem

A previous deployment attempt leaves a dirty Persistent Volume Claim (PVC) causing the MySQL container to fail to start.

Cause

Deleting a project in OpenShift does not clean the PVCs associated with it.

Solution

Procedure

  1. Find the PVC containing the erroneous MySQL data with the oc get pvc command:

    # oc get pvc
    NAME                    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
    backend-redis-storage   Bound     vol003    100Gi      RWO,RWX       4d
    mysql-storage           Bound     vol006    100Gi      RWO,RWX       4d
    system-redis-storage    Bound     vol008    100Gi      RWO,RWX       4d
    system-storage          Bound     vol004    100Gi      RWO,RWX       4d
  2. Stop the deployment of the system-mysql pod by clicking cancel deployment in the OpenShift UI.
  3. Delete everything under the MySQL path to clean the volume, as shown in the sketch after this procedure.
  4. Start a new system-mysql deployment.
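
The following is a sketch of step 3, assuming the persistent volume is backed by a host directory such as /path/for/pvs/mysql-storage (a hypothetical path; adjust it to your storage layout):

# run on the host that backs the persistent volume; the path below is an assumption
sudo rm -rf /path/for/pvs/mysql-storage/*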

2.9.2. Wrong or missing credentials of the authenticated image registry

Problem

Pods are not starting. ImageStreams show the following error:

! error: Import failed (InternalError): ...unauthorized: Please login to the Red Hat Registry

Cause

While installing 3scale on OpenShift 4.x, OpenShift fails to start pods because ImageStreams cannot pull the images they reference. This happens because the pods cannot authenticate against the registries they point to.

Solution

Procedure

  1. Type the following command to verify the configuration of your container registry authentication:

    $ oc get secret
    • If your secret exists, you will see the following output in the terminal:

      threescale-registry-auth          kubernetes.io/dockerconfigjson        1         4m9s
    • However, if you do not see the output, you must do the following:
  2. Use the credentials you previously set up while Creating a registry service account to create your secret.
  3. Use the steps in Configuring registry authentication in OpenShift, replacing <your-registry-service-account-username> and <your-registry-service-account-password> in the oc create secret command provided.
  4. Generate the threescale-registry-auth secret in the same namespace as the APIManager resource. You must run the following commands inside <project-name>:

    oc project <project-name>
    oc create secret docker-registry threescale-registry-auth \
      --docker-server=registry.redhat.io \
      --docker-username="<your-registry-service-account-username>" \
      --docker-password="<your-registry-service-account-password>" \
      --docker-email="<email-address>"
  5. Delete and recreate the APIManager resource:

    $ oc delete -f apimanager.yaml
    apimanager.apps.3scale.net "example-apimanager" deleted
    
    $ oc create -f apimanager.yaml
    apimanager.apps.3scale.net/example-apimanager created

Verification

  1. Type the following command to confirm that deployments have a status of Starting or Ready. The pods then begin to spawn:

    $ oc describe apimanager
    (...)
    Status:
      Deployments:
        Ready:
          apicast-staging
          system-memcache
          system-mysql
          system-redis
          zync
          zync-database
          zync-que
        Starting:
          apicast-production
          backend-cron
          backend-worker
          system-sidekiq
          system-sphinx
        Stopped:
          backend-listener
          backend-redis
          system-app
  2. Type the following command to see the status of each pod:

    $ oc get pods
    NAME                               READY   STATUS             RESTARTS   AGE
    3scale-operator-66cc6d857b-sxhgm   1/1     Running            0          17h
    apicast-production-1-deploy        1/1     Running            0          17m
    apicast-production-1-pxkqm         0/1     Pending            0          17m
    apicast-staging-1-dbwcw            1/1     Running            0          17m
    apicast-staging-1-deploy           0/1     Completed          0          17m
    backend-cron-1-deploy              1/1     Running            0          17m

2.9.3. Incorrectly pulling from the Docker registry

Problem

The following error occurs during installation:

svc/system-redis - 1EX.AMP.LE.IP:6379
  dc/system-redis deploys docker.io/rhscl/redis-32-rhel7:3.2-5.3
    deployment #1 failed 13 minutes ago: config change

Cause

OpenShift searches for and pulls container images by issuing the docker command. This command refers to the docker.io Docker registry instead of the registry.redhat.io Red Hat Ecosystem Catalog.

This occurs when the system contains an unexpected version of the Docker containerized environment.

Solution

Procedure

Use the appropriate version of the Docker containerized environment.

2.9.4. Permission issues for MySQL when persistent volumes are mounted locally

Problem

The system-mysql pod crashes and does not deploy, causing other systems that depend on it to fail deployment. The pod log displays the following error:

[ERROR] Cannot start server : on unix socket: Permission denied
[ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
[ERROR] Aborting

Cause

The MySQL process is started with inappropriate user permissions.

Solution

Procedure

  1. The directories used for the persistent volumes MUST have write permissions for the root group. Having read-write permissions for the root user only is not enough, because the MySQL service runs as a different user in the root group. Execute the following command as the root user:

    chmod -R g+w /path/for/pvs
  2. Execute the following command to prevent SELinux from blocking access:

    chcon -Rt svirt_sandbox_file_t /path/for/pvs

2.9.5. Unable to upload logo or images

Problem

Unable to upload a logo - system-app logs display the following error:

Errno::EACCES (Permission denied @ dir_s_mkdir - /opt/system/public//system/provider-name/2

Cause

Persistent volumes are not writable by OpenShift.

Solution

Procedure

Ensure your persistent volume is writable by OpenShift. It should be owned by root group and be group writable.
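
For example, on the host that backs the volume, the ownership and permissions can be set as follows. This is a sketch reusing the placeholder path from the previous section:

chgrp -R root /path/for/pvs    # make the root group the group owner; the path is a placeholder
chmod -R g+w /path/for/pvs     # grant group write access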

2.9.6. Test calls not working on OpenShift

Problem

Test calls do not work after creation of a new service and routes on OpenShift. Direct calls via curl also fail, stating: service not available.

Cause

3scale requires HTTPS routes by default, and OpenShift routes are not secured.

Solution

Procedure

Ensure the secure route checkbox is selected in your OpenShift router settings.
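
If you prefer the command line, a secured (edge-terminated) route can be created for the service instead. This is a sketch; the route and service names are placeholders:

oc create route edge <route-name> --service=<service-name>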

2.9.7. APIcast on a different project from 3scale failing to deploy

Problem

APIcast deploy fails (pod does not turn blue). You see the following error in the logs:

update acceptor rejected apicast-3: pods for deployment "apicast-3" took longer than 600 seconds to become ready

You see the following error in the pod:

Error synching pod, skipping: failed to "StartContainer" for "apicast" with RunContainerError: "GenerateRunContainerOptions: secrets \"apicast-configuration-url-secret\" not found"

Cause

The secret was not properly set up.

Solution

Procedure

When creating a secret with APIcast v3, specify apicast-configuration-url-secret:

oc create secret generic apicast-configuration-url-secret --from-literal=password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>

2.10. Additional resources