Chapter 5. 3scale AMP On-premises Installation Guide

In this guide you’ll learn how to install 3scale 2.2 (on-premises) on OpenShift using OpenShift templates.

5.1. Prerequisites

  • You must configure 3scale servers for UTC (Coordinated Universal Time).

5.2. 3scale AMP OpenShift Templates

Red Hat 3scale API Management Platform (AMP) 2.2 provides an OpenShift template. You can use this template to deploy AMP onto OpenShift Container Platform.

The 3scale AMP template is composed of the following:

  • Two built-in APIcast API gateways
  • One AMP admin portal and developer portal with persistent storage

5.3. System Requirements

The 3scale API Management OpenShift template requires the following:

5.3.1. Environment Requirements

3scale API Management requires an environment that matches one of the supported configurations.

Persistent Volumes:

  • 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence
  • 1 RWX (ReadWriteMany) persistent volume for CMS and System-app Assets

The RWX persistent volume must be configured to be group writable. Refer to the OpenShift documentation for a list of persistent volume types which support the required access modes.
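
To confirm that the persistent volumes available in your cluster support the required access modes, you can list them with the oc client (a quick check; the columns shown are standard PersistentVolume fields):

    oc get pv -o custom-columns=NAME:.metadata.name,ACCESSMODES:.spec.accessModes,CAPACITY:.spec.capacity.storage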

5.3.2. Hardware Requirements

Hardware requirements depend on your usage needs. Red Hat recommends that you test and configure your environment to meet your specific requirements. Consider the following recommendations when configuring your environment for 3scale on OpenShift:

  • Compute optimized nodes for deployments on cloud environments (AWS c4.2xlarge or Azure Standard_F8).
  • Very large installations may require a separate node (AWS M4 series or Azure Av2 series) for Redis if memory requirements exceed your current node’s available RAM.
  • Separate nodes between routing and compute tasks.
  • Dedicate compute nodes to 3scale specific tasks.
  • Set the PUMA_WORKERS variable of the backend listener to the number of cores in your compute node.
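
For example, on a compute node with 8 cores, you could set PUMA_WORKERS after deployment with the oc client (a sketch; backend-listener is the deployment configuration created by the AMP template, which you can confirm with oc get dc):

    oc env dc/backend-listener --overwrite PUMA_WORKERS=8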

5.4. Configure Nodes and Entitlements

Before you can deploy 3scale on OpenShift, you must configure your nodes and the entitlements required for your environment to fetch images from Red Hat.

Perform the following steps to configure entitlements:

  1. Install Red Hat Enterprise Linux (RHEL) on each of your nodes.
  2. Register your nodes with Red Hat using the Red Hat Subscription Manager (RHSM).
  3. Attach your nodes to your 3scale subscription using RHSM.
  4. Install OpenShift on your nodes, complying with the version and configuration requirements for your 3scale release.

  5. Install the OpenShift command line interface.
  6. Enable access to the rhel-7-server-3scale-amp-2.2-rpms repository using the subscription manager:

    sudo subscription-manager repos --enable=rhel-7-server-3scale-amp-2.2-rpms
  7. Install the 3scale-amp-template package. The template is saved in /opt/amp/templates.

    sudo yum install 3scale-amp-template
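
You can confirm the template installed correctly by listing the templates directory:

    ls /opt/amp/templates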

5.5. Deploy the 3scale AMP on OpenShift using a Template

5.5.1. Prerequisites

  • The amp.yml template installed in /opt/amp/templates, as described in Section 5.4, Configure Nodes and Entitlements.

Follow these procedures to install AMP onto OpenShift using a .yml template:

5.5.2. Import the AMP Template

Perform the following steps to import the AMP template into your OpenShift cluster:

  1. From a terminal session log in to OpenShift:

    oc login
  2. Select your project, or create a new project:

    oc project <project_name>
    oc new-project <project_name>
  3. Enter the oc new-app command:

    • Specify the --file option with the path to the amp.yml file installed in /opt/amp/templates as part of the Configure Nodes and Entitlements section.
    • Specify the --param option with the WILDCARD_DOMAIN parameter set to the domain of your OpenShift cluster.
    • Optionally, specify the --param option with the WILDCARD_POLICY parameter set to Subdomain to enable wildcard domain routing:

      Without Wildcard Routing:

      oc new-app --file /opt/amp/templates/amp.yml --param WILDCARD_DOMAIN=<WILDCARD_DOMAIN>

      With Wildcard Routing:

      oc new-app --file /opt/amp/templates/amp.yml --param WILDCARD_DOMAIN=<WILDCARD_DOMAIN> --param WILDCARD_POLICY=Subdomain
  4. The terminal will show the master and tenant URLs and credentials for your newly created AMP admin portal. This output should include the following information:

    • master admin username
    • master password
    • master token information
    • tenant username
    • tenant password
    • tenant token information

      Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.
      
      ...
      
      * With parameters:
       * ADMIN_PASSWORD=xXxXyz123 # generated
       * ADMIN_USERNAME=admin
       * TENANT_NAME=user
      
       ...
      
       * MASTER_NAME=master
       * MASTER_USER=master
       * MASTER_PASSWORD=xXxXyz123 # generated
      
       ...
      
      --> Success
      Access your application via route 'user-admin.3scale-project.example.com'
      Access your application via route 'master-admin.3scale-project.example.com'
      Access your application via route 'backend-user.3scale-project.example.com'
      Access your application via route 'user.3scale-project.example.com'
      Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
      Access your application via route 'api-user-apicast-production.3scale-project.example.com'
      Access your application via route 'apicast-wildcard.3scale-project.example.com'
      
      ...

      Make a note of these details for future reference.

      Note

      You may need to wait a few minutes for AMP to fully deploy on OpenShift for your login and credentials to work.
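
      You can track deployment progress from the terminal (a sketch; system-app is one of the deployment configurations created by the template):

      oc get pods
      oc rollout status dc/system-app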

More Information

For information about wildcard domains on OpenShift, visit Using Wildcard Routes (for a Subdomain).

5.5.3. Configure SMTP Variables (Optional)

3scale uses email to send notifications and to invite new users. If you intend to use these features, you must provide your own SMTP server and configure SMTP variables in the SMTP config map.

Perform the following steps to configure the SMTP variables in the SMTP config map:

  1. If you are not already logged in, log in to OpenShift:

    oc login
  2. Configure variables for the SMTP config map. Use the oc patch command, specifying the configmap and smtp objects, followed by the -p option with the new values in JSON for the following variables:

    • address: Allows you to specify a remote mail server as a relay.
    • username: Specify your mail server username.
    • password: Specify your mail server password.
    • domain: Specify a HELO domain.
    • port: Specify the port on which the mail server is listening for new connections.
    • authentication: Specify the authentication type of your mail server. Allowed values: plain (sends the password in the clear), login (sends the password Base64 encoded), or cram_md5 (exchanges information and a cryptographic Message Digest 5 algorithm to hash important information).
    • openssl.verify.mode: Specify how OpenSSL checks certificates when using TLS. Allowed values: none, peer, client_once, or fail_if_no_peer_cert.

    Example:

    oc patch configmap smtp -p '{"data":{"address":"<your_address>"}}'
    oc patch configmap smtp -p '{"data":{"username":"<your_username>"}}'
    oc patch configmap smtp -p '{"data":{"password":"<your_password>"}}'
  3. After you have set the configmap variables, redeploy the system-app, system-resque, and system-sidekiq pods:

    oc rollout latest dc/system-app
    oc rollout latest dc/system-resque
    oc rollout latest dc/system-sidekiq

5.6. 3scale AMP Template Parameters

Template parameters configure environment variables of the AMP .yml template during and after deployment.

  • AMP_RELEASE: AMP release tag. Default: 2.2.0. Required: yes.
  • ADMIN_PASSWORD: A randomly generated AMP administrator account password. Default: N/A. Required: yes.
  • ADMIN_USERNAME: AMP administrator account username. Default: admin. Required: yes.
  • APICAST_ACCESS_TOKEN: Read Only Access Token that APIcast will use to download its configuration. Default: N/A. Required: yes.
  • ADMIN_ACCESS_TOKEN: Admin Access Token with all scopes and write permissions for API access. Default: N/A. Required: no.
  • WILDCARD_DOMAIN: Root domain for the wildcard routes. For example, a root domain example.com will generate 3scale-admin.example.com. Default: N/A. Required: yes.
  • WILDCARD_POLICY: Enable wildcard routes to the built-in APIcast gateways by setting the value to "Subdomain". Default: none. Required: yes.
  • TENANT_NAME: Tenant name under the root domain; the Admin UI will be available at this name with the -admin suffix. Default: 3scale. Required: yes.
  • MYSQL_USER: Username for the MySQL user used to access the database. Default: mysql. Required: yes.
  • MYSQL_PASSWORD: Password for the MySQL user. Default: N/A. Required: yes.
  • MYSQL_DATABASE: Name of the MySQL database accessed. Default: system. Required: yes.
  • MYSQL_ROOT_PASSWORD: Password for the MySQL root user. Default: N/A. Required: yes.
  • SYSTEM_BACKEND_USERNAME: Internal 3scale API username for internal 3scale API authentication. Default: 3scale_api_user. Required: yes.
  • SYSTEM_BACKEND_PASSWORD: Internal 3scale API password for internal 3scale API authentication. Default: N/A. Required: yes.
  • REDIS_IMAGE: Redis image to use. Default: registry.access.redhat.com/rhscl/redis-32-rhel7:3.2. Required: yes.
  • MYSQL_IMAGE: MySQL image to use. Default: registry.access.redhat.com/rhscl/mysql-57-rhel7:5.7-5. Required: yes.
  • SYSTEM_BACKEND_SHARED_SECRET: Shared secret to import events from backend to system. Default: N/A. Required: yes.
  • SYSTEM_APP_SECRET_KEY_BASE: System application secret key base. Default: N/A. Required: yes.
  • APICAST_MANAGEMENT_API: Scope of the APIcast Management API. Can be disabled, status, or debug. At least status is required for health checks. Default: status. Required: no.
  • APICAST_OPENSSL_VERIFY: Turn OpenSSL peer verification on or off when downloading the configuration. Can be set to true or false. Default: false. Required: no.
  • APICAST_RESPONSE_CODES: Enable logging of response codes in APIcast. Default: true. Required: no.
  • APICAST_REGISTRY_URL: A URL that resolves to the location of APIcast policies. Default: http://apicast-staging:8090/policies. Required: yes.
  • MASTER_USER: Master administrator account username. Default: master. Required: yes.
  • MASTER_NAME: The subdomain value for the master admin portal; it will be appended with the -master suffix. Default: master. Required: yes.
  • MASTER_PASSWORD: A randomly generated master administrator password. Default: N/A. Required: yes.
  • MASTER_ACCESS_TOKEN: A token with master-level permissions for API calls. Default: N/A. Required: yes.
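
As an illustration, several of these parameters can be combined in a single deployment command (the values shown are examples only; substitute your own domain and names):

    oc new-app --file /opt/amp/templates/amp.yml --param WILDCARD_DOMAIN=example.com --param TENANT_NAME=mycompany --param ADMIN_USERNAME=apiadmin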

5.7. Use APIcast with AMP on OpenShift

APIcast with AMP on OpenShift differs from APIcast with AMP hosted and requires unique configuration procedures.

This section explains how to deploy APIcast with AMP on OpenShift.

5.7.1. Deploy APIcast Templates on an Existing OpenShift Cluster Containing your AMP

AMP OpenShift templates contain two built-in APIcast API gateways by default. If you require more API gateways, or require separate APIcast deployments, you can deploy additional APIcast templates onto your OpenShift cluster.

Perform the following steps to deploy additional API gateways onto your OpenShift cluster:

  1. Create an access token with the following configurations:

    • scoped to Account Management API
    • has read-only access
  2. Log in to your APIcast Cluster:

    oc login
  3. Create a secret that allows APIcast to communicate with AMP. Specify new-basicauth, apicast-configuration-url-secret, and the --password parameter with the access token, tenant name, and wildcard domain of your AMP deployment:

    oc secret new-basicauth apicast-configuration-url-secret --password=https://<APICAST_ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
    Note

    TENANT_NAME is the tenant name under the root domain at which the Admin UI is available. The default value for TENANT_NAME is "3scale". If you used a custom value in your AMP deployment, then you must use that value here.

  4. Import the APIcast template by downloading the apicast.yml, located on the 3scale GitHub, and running the oc new-app command, specifying the --file option with the apicast.yml file:

    oc new-app --file /path/to/file/apicast.yml

5.7.2. Connect APIcast from an OpenShift Cluster Outside the Cluster Containing Your AMP

If you deploy APIcast onto a different OpenShift cluster, outside of your AMP cluster, you must connect over the public route.

  1. Create an access token with the following configurations:

    • scoped to Account Management API
    • has read-only access
  2. Log in to your APIcast Cluster:

    oc login
  3. Create a secret that allows APIcast to communicate with AMP. Specify new-basicauth, apicast-configuration-url-secret, and the --password parameter with the access token, tenant name, and wildcard domain of your AMP deployment:

    oc secret new-basicauth apicast-configuration-url-secret --password=https://<APICAST_ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
    Note

    TENANT_NAME is the tenant name under the root domain at which the Admin UI is available. The default value for TENANT_NAME is "3scale". If you used a custom value in your AMP deployment, then you must use that value here.

  4. Deploy APIcast onto an OpenShift cluster outside of the OpenShift Cluster with the oc new-app command. Specify the --file option and the file path of your apicast.yml file:

    oc new-app --file /path/to/file/apicast.yml
  5. Update the APIcast BACKEND_ENDPOINT_OVERRIDE environment variable, setting it to the backend route (backend-<TENANT_NAME>) followed by the wildcard domain of the OpenShift cluster containing your AMP deployment:

    oc env dc/apicast --overwrite BACKEND_ENDPOINT_OVERRIDE=https://backend-<TENANT_NAME>.<WILDCARD_DOMAIN>
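
    For example, with the default tenant name 3scale and a wildcard domain of example.com, the command becomes:

    oc env dc/apicast --overwrite BACKEND_ENDPOINT_OVERRIDE=https://backend-3scale.example.com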

5.7.3. Connect APIcast from Other Deployments

Once you have deployed APIcast on other platforms, you can connect them to AMP on OpenShift by configuring the BACKEND_ENDPOINT_OVERRIDE environment variable in your AMP OpenShift Cluster:

  1. Log in to your AMP OpenShift Cluster:

    oc login
  2. Configure the system-app object BACKEND_ENDPOINT_OVERRIDE environment variable:

    If you are using a native installation:

    BACKEND_ENDPOINT_OVERRIDE=https://backend.<your_openshift_subdomain> bin/apicast

    If you are using the Docker containerized environment:

    docker run -e BACKEND_ENDPOINT_OVERRIDE=https://backend.<your_openshift_subdomain>

5.7.4. Change Built-In APIcast Default Behavior

In external APIcast deployments, you can modify default behavior by changing the template parameters in the APIcast OpenShift template.

In built-in APIcast deployments, AMP and APIcast are deployed from a single template. You must modify environment variables after deployment if you wish to change the default behavior for the built-in APIcast deployments.

5.7.5. Connect Multiple APIcast Deployments on a Single OpenShift Cluster over Internal Service Routes

If you deploy multiple APIcast gateways into the same OpenShift cluster, you can configure them to connect using internal routes through the backend listener service instead of the default external route configuration.

You must have an OpenShift SDN plugin installed to connect over internal service routes. How you connect depends on which SDN you have installed.
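
If you are unsure which plugin your cluster uses, you can inspect the cluster network definition as a cluster administrator (a sketch for OpenShift 3.x; the pluginName field names the installed SDN plugin):

    oc get clusternetwork default -o jsonpath='{.pluginName}'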

ovs-subnet

If you are using the ovs-subnet OpenShift SDN plugin, follow these steps to connect over internal routes:

  1. If not already logged in, log in to your OpenShift Cluster:

    oc login
  2. Enter the oc new-app command with the path to the apicast.yml file:

    • Specify the --param option with the BACKEND_ENDPOINT_OVERRIDE parameter set to the domain of your OpenShift cluster’s AMP project:

      oc new-app -f apicast.yml --param BACKEND_ENDPOINT_OVERRIDE=http://backend-listener.<AMP_PROJECT>.svc.cluster.local:3000

ovs-multitenant

If you are using the ovs-multitenant OpenShift SDN plugin, follow these steps to connect over internal routes:

  1. If not already logged in, log in to your OpenShift Cluster:

    oc login
  2. As admin, specify the oadm command with the pod-network and join-projects options to set up communication between both projects:

    oadm pod-network join-projects --to=<AMP_PROJECT> <APICAST_PROJECT>
  3. Enter the oc new-app command with the path to the apicast.yml file:

    • Specify the --param option with the BACKEND_ENDPOINT_OVERRIDE parameter set to the domain of your OpenShift cluster’s AMP project:

      oc new-app -f apicast.yml --param BACKEND_ENDPOINT_OVERRIDE=http://backend-listener.<AMP_PROJECT>.svc.cluster.local:3000
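
After joining the projects, you can optionally verify that they now share a network ID (joined projects show the same NETID in the output):

    oc get netnamespaces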

More information

For information on OpenShift SDN and project network isolation, see: OpenShift SDN

5.8. Troubleshooting

This section contains a list of common installation issues and provides guidance for their resolution.

5.8.1. Previous Deployment Leaves Dirty Persistent Volume Claims

Problem

A previous deployment attempt leaves a dirty Persistent Volume Claim (PVC) causing the MySQL container to fail to start.

Cause

Deleting a project in OpenShift does not clean the PVCs associated with it.

Solution

  1. Find the PVC containing the erroneous MySQL data with oc get pvc:

    # oc get pvc
    NAME                    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
    backend-redis-storage   Bound     vol003    100Gi      RWO,RWX       4d
    mysql-storage           Bound     vol006    100Gi      RWO,RWX       4d
    system-redis-storage    Bound     vol008    100Gi      RWO,RWX       4d
    system-storage          Bound     vol004    100Gi      RWO,RWX       4d
  2. Stop the deployment of the system-mysql pod by clicking cancel deployment in the OpenShift UI.
  3. Delete everything under the MySQL path to clean the volume, as shown in the example after this list.
  4. Start a new system-mysql deployment.
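
For example, if the volume behind mysql-storage is backed by a hypothetical host path /path/for/pvs/mysql-storage, step 3 amounts to running the following as root on the node hosting the volume (the path is illustrative; use the actual path of your persistent volume):

    rm -rf /path/for/pvs/mysql-storage/*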

5.8.2. Incorrectly Pulling from the Docker Registry

Problem

The following error occurs during installation:

svc/system-redis - 1EX.AMP.LE.IP:6379
  dc/system-redis deploys docker.io/rhscl/redis-32-rhel7:3.2-5.3
    deployment #1 failed 13 minutes ago: config change

Cause

OpenShift searches for and pulls container images by issuing the docker command. This command refers to the docker.io Docker registry instead of the registry.access.redhat.com Red Hat container registry.

This occurs when the system contains an unexpected version of the Docker containerized environment.

Solution

Use the appropriate version of the Docker containerized environment.
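
You can check which version of the Docker containerized environment is installed with:

    docker version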

5.8.3. Permissions Issues for MySQL when Persistent Volumes are Mounted Locally

Problem

The system-mysql pod crashes and does not deploy, causing other systems that depend on it to fail deployment. The pod log displays the following error:

[ERROR] Can't start server : on unix socket: Permission denied
[ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
[ERROR] Aborting

Cause

The MySQL process is started with inappropriate user permissions.

Solution

  1. The directories used for the persistent volumes must be writable by the root group. Read/write permissions for the root user alone are not sufficient, because the MySQL service runs as a different user in the root group. Execute the following command as the root user:

    chmod -R g+w /path/for/pvs
  2. Execute the following command to prevent SELinux from blocking access:

    chcon -Rt svirt_sandbox_file_t /path/for/pvs

5.8.4. Unable to Upload Logo or Images because Persistent Volumes are not Writable by OpenShift

Problem

Unable to upload a logo - system-app logs display the following error:

Errno::EACCES (Permission denied @ dir_s_mkdir - /opt/system/public//system/provider-name/2

Cause

Persistent volumes are not writable by OpenShift.

Solution

Ensure your persistent volume is writable by OpenShift. It should be owned by the root group and be group writable.

5.8.5. Create Secure Routes on OpenShift

Problem

Test calls do not work after creation of a new service and routes on OpenShift. Direct calls via curl also fail, stating: service not available.

Cause

3scale requires HTTPS routes by default, and OpenShift routes are not secured.

Solution

Ensure the "secure route" checkbox is enabled in your OpenShift router settings.

5.8.6. APIcast on a Different Project from AMP Fails to Deploy due to Problem with Secrets

Problem

APIcast deploy fails (pod doesn’t turn blue). The following error appears in the logs:

update acceptor rejected apicast-3: pods for deployment "apicast-3" took longer than 600 seconds to become ready

The following error appears in the pod:

Error synching pod, skipping: failed to "StartContainer" for "apicast" with RunContainerError: "GenerateRunContainerOptions: secrets \"apicast-configuration-url-secret\" not found"

Cause

The secret was not properly set up.

Solution

When creating a secret with APIcast v3, specify apicast-configuration-url-secret:

oc secret new-basicauth apicast-configuration-url-secret --password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>