Red Hat 3Scale 2.saas

For Use with Red Hat 3Scale 2.saas

Red Hat Customer Content Services


This guide documents deployment and infrastructure management with Red Hat 3Scale 2.saas.

Chapter 1. Run Zero-Infrastructure APIs On Amazon API Gateway and 3scale

1.1. Prerequisites for this tutorial

1.2. Goals of this tutorial

This tutorial will show you how to add an API management layer to your existing API using:

  • Amazon API Gateway: for basic API traffic management
  • AWS Lambda: for implementing the logic behind your API
  • ElastiCache: for caching API keys and improving performance
  • VPC: for connecting AWS Lambda with ElastiCache
  • Serverless Framework: for making configuration and deployment to Lambda a lot easier
  • 3scale API Management Platform: for API contracts on tiered application plans, monetization, and developer portal features with interactive API documentation

Below are two overview diagrams that illustrate the components involved and their interactions. The first diagram shows what happens when a certain API endpoint is called for the first time together with a certain API key.

3scale Custom Authorizer FirstCall

Here is the flow for the first call:

  1. Amazon API Gateway checks the 3scale custom authorizer to see whether this call is authorized.
  2. The 3scale custom authorizer checks whether the authorization info is stored in the cache.
  3. Since it’s the first call, there is no info stored in the cache. So, the 3scale custom authorizer queries the 3scale API Management Platform, which returns whether this call is authorized or not.
  4. The 3scale custom authorizer updates the cache accordingly.
  5. The 3scale custom authorizer returns the authorization response to the Amazon API Gateway.
  6. If the call was positively authorized, the Amazon API Gateway directly queries the API backend, which in this case is a Lambda function.

The second diagram below shows what happens to every subsequent request to the same API endpoint with the same API key.

3scale Custom Authorizer SubsequentCalls

Here is the flow for every subsequent call:

  1. Amazon API Gateway checks with the 3scale custom authorizer to see whether this call is authorized.
  2. The 3scale custom authorizer checks whether the authorization info is stored in the cache. Since other calls have previously been executed, the cache has the authorization info stored.
  3. The 3scale custom authorizer returns the authorization response to the Amazon API Gateway.
  4. If the call was positively authorized, the Amazon API Gateway directly queries the API backend, which in our case is a Lambda function.
  5. The 3scale custom authorizer calls the 3scale async reporting function.
  6. The 3scale async reporting function reports the traffic back to the 3scale API Management Platform, which is used for API analytics.
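The two flows above amount to a cache-aside pattern around the 3scale authorization call. Below is a minimal sketch in plain Node.js; the cache and the 3scale lookup are stubbed out as assumptions, and the real handler.js used later in this tutorial is more involved:

```javascript
// Sketch of the authorizer's cache-aside flow. `cache` and `checkWith3scale`
// are stand-ins for ElastiCache (Redis) and the 3scale authorization call.
async function authorize(apiKey, cache, checkWith3scale) {
  const cached = await cache.get(apiKey);
  if (cached !== undefined) {
    // Subsequent call: answer from the cache; reporting happens async via SNS.
    return { authorized: cached, fromCache: true };
  }
  // First call: ask the 3scale API Management Platform, then cache the result.
  const authorized = await checkWith3scale(apiKey);
  await cache.set(apiKey, authorized);
  return { authorized, fromCache: false };
}

// Tiny in-memory cache standing in for ElastiCache in this sketch.
function memoryCache() {
  const store = new Map();
  return {
    get: async (k) => store.get(k),
    set: async (k, v) => { store.set(k, v); },
  };
}

module.exports = { authorize, memoryCache };
```

Note that only the first call for a given key reaches 3scale; every later call is answered from the cache.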

1.3. (Optional) Create an API and deploy it to Amazon API Gateway

If you don’t yet have an API deployed on Amazon API Gateway, you can create one very easily using Serverless Framework. sls is the Serverless CLI, which you should have installed on your system as part of the prerequisites of this tutorial.


  1. Create a new directory: mkdir sls-awstutorial
  2. Move into the new directory: cd sls-awstutorial
  3. Create a service:

serverless create --template aws-nodejs

  4. This should have created two files: handler.js for the logic of the Lambda function, and serverless.yml for the configuration.
  5. Create an endpoint: in the serverless.yml file, in the functions section, replace the code with the following lines:

    functions:
      hello:
        handler: handler.hello
        events:
          - http:
              path: api/hello
              method: get

  6. Test the function locally by running `sls invoke local -f hello`. You should see the following result:

    {
        "statusCode": 200,
        "body": "{\"message\":\"Go Serverless v1.0! Your function executed successfully!\",\"input\":\"\"}"
    }

This is what will be returned by our API endpoint.

  7. Finally, deploy this endpoint using sls deploy. It will deploy the Lambda function and the API Gateway.

If it succeeded, it should give you the URL of the API created. You will use this API for the rest of the tutorial.

1.4. Deploy stack

For this integration, you’re going to use a lot of different services from the AWS stack. To simplify the deployment and the linking of this stack, you’re going to use CloudFormation.

If you’re not familiar with CloudFormation, it’s an AWS service that lets you describe in a JSON file all the AWS services you want to use and link them together. You can read more about CloudFormation here.

We’ve also bundled the CloudFormation stack into our Serverless project, so the Lambda functions can take advantage of CloudFormation.

The Lambda functions will call the 3scale API Management Platform to check whether calls to the API are authorized.

The Serverless Framework is a great way to deploy Lambda functions; it is a tool that helps you configure, deploy, and manage them from the command line. If you’re not familiar with it, check out their site.

Follow these steps to deploy the 3scale stack:

  1. Clone this repo locally using the following commands:
git clone
cd awsThreeScale_Authorizer
  2. In the awsThreeScale_Authorizer folder, there are two different files:

    • handler.js - contains the logic of the two functions, authorizer and authrepAsync:
    • authorizer is the Lambda function called by the Amazon API Gateway to authorize incoming API calls (see the first diagram above).
    • authrepAsync is called by the authorizer function to sync with the 3scale API Management Platform for API traffic reporting and analytics (see the second diagram above).
    • serverless.yml - configuration of the Serverless project and the CloudFormation template

To check the CloudFormation settings, look at the bottom of the serverless.yml file, under the Resources section.

Before deploying this to AWS, we need to complete a few more tasks.

  3. Install the Serverless project dependencies:

    npm install

This will install all the npm modules you need to run the functions.

  4. The logic of each Lambda function is kept in the handler.js file, but we don’t have to touch it. If you look at the code in this file, you will see that it uses environment variables. Let’s set them up: in the serverless.yml file, under the environment section, replace the placeholders YOUR_THREESCALE_PROVIDER_KEY and YOUR_THREESCALE_SERVICE_ID with your own values.
The relevant part of the environment section looks similar to the following (variable names may differ slightly in the repo):

    environment:
      SERVERLESS_REGION: ${self:provider.region}
      THREESCALE_PROVIDER_KEY: YOUR_THREESCALE_PROVIDER_KEY
      THREESCALE_SERVICE_ID: YOUR_THREESCALE_SERVICE_ID
      ELASTICACHE_ENDPOINT:
        Fn::GetAtt:
          - elasticCache
          - RedisEndpoint.Address
      SNS_TOPIC:
        Ref: SNStopic

You can find YOUR_THREESCALE_PROVIDER_KEY under the Accounts tab in your 3scale Admin Portal.

3scale account

You can find YOUR_THREESCALE_SERVICE_ID under the APIs tab.

3scale service_id

You don’t need to change anything else in this file. Serverless and CloudFormation will populate the other environment variables.

  5. Finally, deploy your functions and resources: sls deploy

This command may take a while, as it’s deploying all the AWS services. At the end of the output, you will see the names of the deployed resources.

If everything went well, you are done with the coding part. You are ready to use 3scale on your API.

1.5. Add 3scale custom authorizer to Amazon API Gateway

You are now going to add the custom authorizer functions you just deployed to your existing API on the Amazon API Gateway.

To do so follow these steps:

  1. Go to the Amazon API Gateway console and select your API.
  2. You should see a section named Custom Authorizers in the menu on the left hand side. Click on it.
  3. Click on the Create button to create your custom authorizer.
  4. Name it threescale.

    Create a new custom authorizer in AWS console
  5. Choose the region where your Lambda has been deployed
  6. For the Lambda function field, look for and choose the authorizer function you have deployed earlier. (Just start typing and it should appear: ThreeScale-authorizer.)
  7. Under Identity token source, enter method.request.header.apikey. This means that we expect developers to make calls to our API with an apikey header, and we will use this key to authenticate the request.
  8. Finally, change the TTL to 0.

    Configure the authorizer

We now have a custom authorizer, which is already handling caching.

Finally, we have to apply it to our API endpoints:

  1. Go to the Resources part of your API.
  2. Select a method, and click on the method request box.
  3. Change Authorization to the threescale custom authorizer you have created before and save.

    Authorization setting on endpoint
  4. Finally, re-deploy your API by clicking on the Actions button and then select Deploy API at the bottom.

You would have to reproduce these steps for each endpoint of your API to make sure your entire API is secured. For now, you can limit it to a single endpoint.

1.6. Testing the whole flow end-to-end

You are almost done!

Test to see whether everything worked:

  1. Go to your 3scale Admin Portal.
  2. Take a valid API key. Any of them will do. Once you’re logged in to your 3scale account, go to the Applications section.
  3. Click on the default application.

    List of applications
  4. On the next screen, you’ll see details about this application such as which plan is associated with it and traffic over the last 30 days. You can look at those features later. For now, you’re only interested in the User Key. Copy it.

    Details of an application

Finally, to test the whole API flow end-to-end, including authorization via custom authorizer and 3scale API Management Platform, make a call to your API endpoint and include the API key as a header.

To do that, open a terminal and run the following command (you could also use a client like Postman), replacing YOUR_API_ENDPOINT with the URL of your deployed API:

    curl -X GET YOUR_API_ENDPOINT \
      -H 'apikey: 3SCALE_API_KEY'

If you did everything correctly, you will see the result of your API call returned.

Now try with an invalid key: simply replace the API key with any random string and hit the endpoint again. This time it does not work — the call is not authorized, and an error response is returned.

Your API is now protected and only accessible to people with valid API keys.

1.7. Additional resources

1.7.1. Intro to the Amazon API Gateway custom authorizer principles

With the Amazon API Gateway custom authorizer, you can control access to your APIs using bearer token authentication strategies such as OAuth and SAML. To do so, you provide and configure a custom authorizer (basically your own Lambda function) for the Amazon API Gateway, which is then used to authorize client requests for the configured APIs. You can find all the details about how to do this in a dedicated Amazon API Gateway tutorial.
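Concretely, a custom authorizer must return a principal identifier plus an IAM policy that allows or denies execute-api:Invoke on the requested method. A minimal sketch of building that response (the principal and ARN values below are placeholders):

```javascript
// Builds the response object an API Gateway custom authorizer must return:
// a principal identifier plus an IAM policy allowing or denying the call.
function buildAuthorizerResponse(principalId, effect, methodArn) {
  return {
    principalId: principalId,
    policyDocument: {
      Version: '2012-10-17',
      Statement: [
        {
          Action: 'execute-api:Invoke',
          Effect: effect, // 'Allow' or 'Deny'
          Resource: methodArn,
        },
      ],
    },
  };
}

module.exports = { buildAuthorizerResponse };
```

API Gateway evaluates the returned policy to decide whether the request proceeds to the backend.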

1.7.2. Async mechanism using Amazon Simple Notification Service

The 3scale custom authorizer function will be called every time a request comes into the Amazon API Gateway. It’s inefficient to call the 3scale API Management Platform every time to check whether a certain API key is authorized or not. That’s where ElastiCache comes in handy.

You implemented the logic of your custom authorizer such that the first time you see an API key, you will ask 3scale to authorize it. You then store the result in cache, so you can serve it next time the same API key is making another call.

All subsequent calls use the authrepAsync Lambda function to sync the cache with the 3scale API Management Platform.

This authrepAsync function is called by the main authorizer function using the Amazon Simple Notification Service (SNS). SNS is a publish/subscribe messaging service available on AWS. A Lambda function can subscribe to a specific topic; every time a message related to this topic is sent, the Lambda function is triggered.
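A sketch of how the authorizer side might package usage data before handing it off to SNS (the field names are illustrative, not the exact ones used in the repo; the aws-sdk publish call is shown only as a comment):

```javascript
// Packages the usage data that the authorizer would report asynchronously.
// Field names here are illustrative, not the exact ones used by the repo.
function buildUsageMessage(apiKey, path, statusCode) {
  return JSON.stringify({
    apiKey: apiKey,
    path: path,
    statusCode: statusCode,
    reportedAt: new Date().toISOString(),
  });
}

// With the aws-sdk, this message would then be published to the SNS topic that
// the authrepAsync function subscribes to, roughly:
//   const AWS = require('aws-sdk');
//   new AWS.SNS().publish({ TopicArn: process.env.SNS_TOPIC, Message: msg }).promise();

module.exports = { buildUsageMessage };
```

Because the publish is asynchronous, the authorizer can answer API Gateway immediately while reporting happens in the background.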

sns schema

Chapter 2. API Deployment On Microsoft Azure

Since APIs are platform agnostic, they can be deployed on any platform. This tutorial shows a fast way to deploy a web API on Microsoft Azure. You will use the Ruby Grape gem to create the API interface, an NGINX proxy, Thin server, and Capistrano to deploy from the command line.

For the purpose of this tutorial, you can use any Ruby-based API running on Thin server, or you can clone the Echo-API.

2.1. Create and configure Microsoft Azure VM

Start by generating an X.509 certificate with a 2048-bit RSA key pair, which you will use to SSH into your Azure VM. It will be useful when you set up your VM.

To generate this type of key, you can run the following command:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem

Now, get started by creating your Microsoft Azure account. For this tutorial, you can use the free trial option. Once the Azure account is created, go to the Dashboard on the Virtual Machines tab. There, you will be guided to create your first VM. Choose the from gallery option and select an Ubuntu Server 12.04 LTS.

In step 2, you will be able to upload the .pem file you created earlier; with certificate authentication, you should not be prompted for your password again.

In steps 3 and 4, choose the options that best suit your needs.

It will take a couple of minutes for your VM to be ready. When it is, you will be able to access its dashboard where you can monitor activity (CPU, disk, network) of your VM and upgrade its size.

The VM comes with only a few packages installed, so you’ll need to access it to install other components. Using the key you created earlier, SSH into your VM:

ssh -i myPrivateKey.key -p 22

Once in the VM, run the following commands to install everything you need:

sudo apt-get -y update
sudo apt-get -y upgrade
sudo apt-get -y install ruby1.9.3 build-essential libsqlite3-dev libpcre3 libpcre3-dev libssl-dev openssl libreadline6 libreadline6-dev libxml2-dev libxslt1-dev

You can check that Ruby installation is complete by running:

ruby -v

It should output something like ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux].

You also need to install bundler and thin:

sudo gem install bundler
sudo gem install thin

Now, you should have everything you need on the VM. Go back to its dashboard and click on the endpoints tab. There, add the HTTP endpoint on port 80, and the fields should autofill.

2.2. Install OpenResty

In order to streamline this step, we recommend that you install OpenResty: the standard NGINX core bundled with almost all the necessary third-party NGINX modules.

On your Azure VM, compile and install NGINX:

cd ~
sudo wget
sudo tar -zxvf ngx_openresty-VERSION.tar.gz
cd ngx_openresty-VERSION/
sudo ./configure --prefix=/opt/openresty --with-luajit --with-http_iconv_module -j2
sudo make
sudo make install

2.3. Configure your GitHub repo

This tutorial uses GitHub to host the code. If you don’t already have a repo for your API, make sure to create one and host it on GitHub. If you’re not familiar with Git and GitHub, check out this great tutorial.

To use Git on your VM and have access to your GitHub repo, you need to generate an SSH key on your VM and add it to Github as explained here.

2.3.1. Warning

Hosting your code in a public GitHub repo makes it visible to anyone. Make sure it does not contain any sensitive information, such as provider keys, before pushing it publicly.

2.4. Configure your API

This is how the system will work:

  1. Thin server will be launched on port 8000.
  2. The upstream YOURAPINAME is listening on localhost:8000.
  3. Incoming connections on port 80 (as defined in the server section) are "redirected" to YOURAPINAME.
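The three points above correspond to an NGINX layout like the following sketch (YOURAPINAME is the placeholder used throughout this section; the actual config comes from the 3scale download described below, so treat this as orientation only):

```nginx
# Upstream pointing at the Thin server started on port 8000.
upstream YOURAPINAME {
    server localhost:8000;
}

# Public-facing server on port 80 forwarding to the upstream.
server {
    listen 80;
    location / {
        proxy_pass http://YOURAPINAME;
    }
}
```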

2.4.1. On 3scale

Rather than reinventing the wheel and implementing rate limits, access controls, and analytics from scratch, you’ll use 3scale. If you don’t have an account yet, sign up here, activate it, and log in to the new instance through the links provided. The first time you log in, choose the option for some sample data to be created, so you’ll have some API keys to use later. Go through the tour to get a glimpse of the system’s functionality (optional) and then go ahead with implementation.

To get some instant results, start with the API gateway in the staging environment, which can be used while in development. Then configure an NGINX proxy, which can scale up for full production deployments.

There is some documentation on configuring the API proxy here and more advanced configuration options here.

Once you sign in to your 3scale account, launch your API on the main Dashboard screen, or go to API → Select the service (API) → Integration in the sidebar → Proxy.

Proxy integration

Set the address of your API backend.

After creating some app credentials in 3scale, you can test your API by hitting the staging API gateway endpoint:


where XXX is specific to your staging API gateway, and APP_ID and APP_KEY are the ID and key of one of the sample applications you created when you first logged in to your 3scale account. (If you missed that step, just create a developer account and an application within that account.)

Try it without app credentials, then with incorrect credentials, and then, once authenticated, within and over any rate limits that you’ve defined. Once it’s working to your satisfaction, download the config files for NGINX.


Any time you have errors, check whether you can access the API directly: your-public-dns:3000/v1/words/awesome.json. If it’s not available, check whether the VM is running and whether the Thin server is running on the instance.

There, you will be able to change your API backend address to

Once you’re done, click on Download your nginx config. That will download an archive containing the .conf and .lua file you’re going to use to configure your app.

Modify the .conf accordingly:

If the API gateway and the API are on the same VM, delete the block:

server ....

…and replace it with an upstream block pointing at the Thin server on localhost:8000:

upstream YOURAPINAME {
    server localhost:8000;
}

YOURAPINAME can only contain URL-valid characters, as defined in RFC 3986.

In the .lua file, modify the line that sets ngx.var.proxy_pass so that it reads ngx.var.proxy_pass = "http://YOURAPINAME" in all cases.

Replace the server_name directive with your Azure VM DNS name.


In the server block, add this on top:

root /home/USERNAME/apps/YOURAPINAME/current;
access_log /home/USERNAME/apps/YOURAPINAME/current/log/thin.log;
error_log /home/USERNAME/apps/YOURAPINAME/current/log/error.log;

Replace access_by_lua_file lua_tmp.lua;

…​with…​ access_by_lua_file /opt/openresty/nginx/conf/lua_tmp.lua;

Before post_action /out_of_band_authrep_action; add:

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;

Finally, rename those files nginx.conf and lua_tmp.lua.

2.4.2. Capistrano setup

Use Capistrano to deploy the API. Capistrano is an automation tool, which will let you set up tasks for your deployments and execute them using a command line interface. Capistrano is used on your local machine to deploy on your remote VM.

To install Capistrano, add this line to your Gemfile: gem 'capistrano'

Run the following commands locally to install the new gems and set up Capistrano: bundle install, then capify .

Copy nginx.conf and lua_tmp.lua into /config.

2.5. Capistrano setup

When you ran the capify command, it created two files, Capfile and deploy.rb. In deploy.rb, you describe all the commands necessary to deploy your app.

In /config edit deploy.rb and replace the content with the following:

require "bundler/capistrano"
set :application, "YOURAPINAME"
set :user,"USERNAME"
set :scm, :git
set :repository, ""
set :branch, "master"

set :use_sudo, false

server "VNDNSname", :web, :app, :db, primary: true

set :deploy_to, "/home/#{user}/apps/#{application}"
default_run_options[:pty] = true
ssh_options[:forward_agent] = false
ssh_options[:port] = 22
ssh_options[:keys] = ["/PATH/TO/myPrivateKey.key"]

namespace :deploy do
    task :start, :roles => [:web, :app] do
      run "cd #{deploy_to}/current && nohup bundle exec thin start -C config/production_config.yml -R"
      sudo "/opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /opt/openresty/nginx/conf/nginx.conf"
    end

    task :stop, :roles => [:web, :app] do
      run "kill -QUIT `cat /opt/openresty/nginx/logs/`"
      run "cd #{deploy_to}/current && nohup bundle exec thin stop -C config/production_config.yml -R"
    end

    task :restart, :roles => [:web, :app] do
      deploy.stop
      deploy.start
    end

    task :setup_config, roles: :app do
      sudo "ln -nfs #{current_path}/config/nginx.conf /opt/openresty/nginx/conf/nginx.conf"
      sudo "ln -nfs #{current_path}/config/lua_tmp.lua /opt/openresty/nginx/conf/lua_tmp.lua"
      sudo "mkdir -p #{shared_path}/config"
    end
    after "deploy:setup", "deploy:setup_config"

    # Override the default cold task so Capistrano doesn't try to run
    # rake:migrate. (This is not a Rails project!)
    task :cold do
      deploy.update
      deploy.start
    end
end

In the above text, replace the following:

  • VNDNSname with your DNS.
  • YOURAPINAME with your application name.
  • USERNAME with the username used to log in to the VM.
  • GITHUBUSERNAME with your Github username.
  • REPO with your Github repo name.
  • /PATH/TO with the path to access the SSH key created before.

The above works well if you don’t have a database in your API. If you do have a database, comment out the task :cold override shown above so that Capistrano’s default cold deploy (including migrations) runs instead.

You also need to add a production_config.yml file in /config to configure the Thin server:

environment: production
chdir: /home/USERNAME/apps/YOURAPINAME/current/
port: 8000
pid: /home/USERNAME/apps/YOURAPINAME/current/tmp/
rackup: /home/USERNAME/apps/YOURAPINAME/current/
log: /home/USERNAME/apps/YOURAPINAME/current/log/thin.log
max_conns: 1024
timeout: 30
max_persistent_conns: 512
daemonize: true

Again, change usernames and paths accordingly.

Commit the changes on the project and upload them to GitHub.

git add .
git commit -m "adding config files"
git push

You are almost done.

2.6. Deploy

From your local development machine, run the following command to set up the remote Azure VM:

cap deploy:setup

You should not be prompted for a password if the path to your ssh key is correct.

Capistrano will connect to your VM and create an apps directory under the home directory of the user account.

Now, you can deploy your API to the VM and launch Thin server using the command: cap deploy:cold

This command gets the latest commit from your GitHub repo, then launches OpenResty and the Thin server.

Your API should now be available on the URL:

2.6.1. Troubleshooting

If you are not able to access your API, SSH into your VM and check that you can call it on localhost using curl, like this:

 curl -X GET "http://localhost:8000/v2/words/hello.json?app_id=APPID&app_key=APPKEY"

If that works, there is something wrong in the NGINX configuration.

You can check the NGINX logs on your VM with:

cat /opt/openresty/nginx/logs/error.log

You should now have an API running on an Azure Linux instance.

Hope you enjoyed this tutorial. Please let us know if you have any questions or comments. We look forward to hearing from you.

Chapter 3. Deploy An API On Amazon EC2 For AWS Rookies

At 3scale we find Amazon to be a fantastic platform for running APIs due to the complete control you have over the application stack. However, for people new to AWS, the learning curve is quite steep. So we put together our best practices into this short tutorial. Besides Amazon EC2, we’ll use the Ruby Grape gem to create the API interface and an NGINX gateway to handle access control. Best of all, everything in this tutorial is completely free.

3.1. Prerequisites

For the purpose of this tutorial, you’ll need a running API based on Ruby and the Thin server. If you don’t have one, you can simply clone an example repo, as described below in the “Deploying the application” section.

We’ll begin with the creation and configuration of the Amazon EC2 instance. If you already have an EC2 instance (micro or not), you can jump to the next step, “Preparing Instance for Deployment”.

3.2. Create and configure EC2 instance

Start by signing up for the Amazon Elastic Compute Cloud (Amazon EC2). The free tier is enough to cover all your basic needs. Once the account is created, go to the EC2 dashboard under your AWS Management Console and click on the “launch instance” button. That will transfer you to a pop-up window where you’ll continue the process:

  • Choose the classic wizard
  • Choose an AMI (Ubuntu Server 12.04.1 LTS 32bit, T1micro instance) leaving all the other settings for “instance details” as default
  • Create a key pair and download it. This will be the key that you’ll use to make an ssh connection to the server. It’s VERY IMPORTANT!
  • Add inbound firewall rules with source (anywhere) for HTTP, HTTPS, ALL ICMP, and TCP port 3000 (used by the Ruby Thin server)

3.3. Prepare instance for deployment

Once the instance is created and running, you can connect to it directly from the console (Windows users can use PuTTY). Right-click on your instance, choose Connect, and select Connect with a standalone SSH Client.

Connecting to the Amazon Instance

Follow the steps and change the username to “ubuntu” (instead of “root”) in the given example.



After executing this step, you are connected to your instance. You’ll have to install new packages. Some of them require root credentials, so you’ll have to set a new root password: sudo passwd root. Then log in as root: su root.

Now with root credentials, execute: sudo apt-get update

Switch back to your normal user with the exit command and install all the required packages:

  • Install the libraries that will be required by rvm, Ruby, and Git:
sudo apt-get install build-essential git zlib1g-dev libssl-dev libreadline-gplv2-dev imagemagick libxml2-dev libxslt1-dev openssl zlib1g libyaml-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison libpq-dev libpq5 libeditline-dev

    sudo apt-get install libreadline6 libreadline6-dev
  • Install Git (on Linux rather than from Source)
  • Install rvm
  • Install Ruby
rvm install 1.9.3
rvm use 1.9.3 --default

3.4. Deploying the application

Our example, the Sentiment API, is located on GitHub. Try cloning the repository:

git clone

You can review the code and tutorial on creating and deploying this app here and here. Note the changes: we’re using only v1, as authentication will go through the gateway.

Now you can install the app’s dependencies by issuing bundle install.

Now you can start the thin server: thin start.

To access the API directly (without any security or access control), open: your-public-ip:3000/v1/words/awesome.json. You can find your public IP in the AWS EC2 Dashboard > Instances, in the details window of your instance.

AWS Details and public IP

3.4.1. Optional

If you want to assign a custom domain to your Amazon instance, you’ll have to do one thing: Add an A record to the DNS record of your domain, mapping the domain to the public IP address.

Your domain provider should either give you some way to set the A record (the IPv4 address), or it will give you a way to edit the nameservers of your domain. If they don’t allow you to set the A record directly find a DNS management service, register your domain as a zone there, and the service will give you the nameservers to enter in the admin panel of your domain provider. You can then add the A record for the domain. Some possible DNS management services include ZoneEdit (basic, free) or Amazon route 53.
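For example, in zone-file notation such an A record looks like this (hypothetical domain, documentation-range IP):

```
api.example.com.   3600   IN   A   203.0.113.10
```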

At this point, your API is open to the world. This is good and bad: it’s great that you’re sharing, but bad that, without rate limits, a few apps could exhaust your server’s resources, and you would have no insight into who is using your API or how it’s being used. The solution is to add API management.

3.5. Enabling API management with 3scale

Rather than reinventing the wheel and implementing rate limits, access controls, and analytics from scratch, you can leverage the 3scale API Management Platform. Sign up for a 3scale account if you haven’t already, activate it, and log in through the links provided. The first time you log in, some sample data will be created for you, so you’ll have an API key to use later. You can go through the wizard to get an idea of the system’s functionality (optional). Then start with the implementation.

To get some instant results, we’ll start with the API gateway in the staging environment which can be used while in development. Then we’ll configure an NGINX gateway that can scale up for full production deployments. Here’s some documentation on the configuration of the API gateway, as well as more advanced configuration options.

Once you’ve signed in to your 3scale account, go to Dashboard > API > Select the service (API) > Integration > edit integration settings and then choose APIcast Self-managed.

Proxy Integration
Proxy Integration2

Set the address of your API backend. This has to be the public IP address, unless the custom domain has been set, including the http protocol and port 3000. Now you can save the changes to the API gateway in the staging environment and test your API by hitting the staging endpoint.

Where XXX is specific to your 3scale account, and USER_KEY is the authentication key of one of the sample applications created when you first logged in to your 3scale account. (If you missed that step, just create a developer account and an application within that account.)

Try it without app credentials; next with incorrect credentials; and then, once authenticated, within and over any rate limits you have defined. Once it’s working to your satisfaction, you can download the config files for NGINX.


Whenever you have errors, check whether you can access the API directly: your-public-dns:3000/v1/words/awesome.json. If that is not available, you need to check whether the AWS instance is running and whether the Thin server is running on the instance.

3.6. Install and deploy APIcast (your API gateway)

Finally, to install and deploy APIcast, follow the steps in the APIcast 2.0 self-managed tutorial for a 'local' deploy.

You’re almost finished! The last step is to start the NGINX gateway and put some traffic through it. If it’s not running yet (remember the Thin server has to be started first), go to your EC2 instance terminal (the one you were connecting through ssh before) and start it now.

The last step will be verifying that the traffic goes through with a proper authorization. To do that, access:


where APP_ID and APP_KEY are the ID and key of the application you want to access through the API call.

Once everything is confirmed as working correctly, you’ll want to block public access to the API backend on port 3000, which bypasses any access controls.

Chapter 4. How To Deploy A Full-stack API Solution With Fuse, 3scale, And OpenShift

This tutorial describes how to get a full-stack API solution (API design, development, hosting, access control, monetization, etc.) using Red Hat JBoss xPaaS for OpenShift and 3scale API Management Platform - Cloud.

The tutorial is based on a collaboration between Red Hat and 3scale to provide a full-stack API solution. This solution includes design, development, and hosting of your API on the Red Hat JBoss xPaaS for OpenShift, combined with the 3scale API Management Platform for full control, visibility, and monetization features.

The API itself can be deployed on Red Hat JBoss xPaaS for OpenShift, which can be hosted in the cloud as well as on premise (that’s the Red Hat part). The API management (the 3scale part) can be hosted on Amazon Web Services (AWS), using 3scale APIcast or OpenShift. This gives a wide range of different configuration options for maximum deployment flexibility.

The diagram below summarizes the main elements of this joint solution. It shows the whole integration chain including enterprise backend systems, middleware, API management, and API customers.

Red Hat and 3scale joint API solution

For specific support questions, please contact support.

This tutorial shows three different deployment scenarios step by step:

  1. Scenario 1 – A Fuse on OpenShift application containing the API. The API is managed by 3scale with the API gateway hosted on Amazon Web Services (AWS) using the 3scale AMI.
  2. Scenario 2 – A Fuse on OpenShift application containing the API. The API is managed by 3scale with the API gateway hosted on APIcast (3scale’s cloud hosted API gateway).
  3. Scenario 3 – A Fuse on OpenShift application containing the API. The API is managed by 3scale with the API gateway hosted on OpenShift.

This tutorial is split into four parts:

  1. Part 1 – Fuse on OpenShift setup
  2. Part 2 – Configure 3scale API Management
  3. Part 3 – Integration of your API services
  4. Part 4 – Testing the API and API Management

The diagram below shows the roles the various parts play in this configuration.

3scale on Red Hat

4.1. Part 1: Fuse on OpenShift setup

You will create a Fuse on OpenShift application that contains the API to be managed. You will use the REST quickstart that is included with Fuse 6.1. This requires a medium or large gear; using the small gear will result in memory errors and/or very poor performance.

4.1.1. Step 1

Sign in to your OpenShift online account. Sign up for an OpenShift online account if you don’t already have one.

Red Hat Openshift

4.1.2. Step 2

Click the "add application" button after signing in.

Application button

4.1.3. Step 3

Under xPaaS, select the Fuse type for the application.

Select Fuse type

4.1.4. Step 4

Now configure the application. Enter the subdomain you’d like your application to show up under, such as "restapitest". This will give a full URL of the form "" – in the example below "". Change the gear size to medium or large, which is required for the Fuse cartridge.

Fuse app configuration

4.1.5. Step 5

Click "create application".

Create application

4.1.6. Step 6

Browse to the application’s hawtio console and sign in.

Hawtio console

4.1.7. Step 7

After signing in, click the "runtime" tab, select the container, and add the REST API example.


4.1.8. Step 8

Click on the "add a profile" button.

Add profile

4.1.9. Step 9

Scroll down to examples/quickstarts and click the "REST" checkbox, then "add". The REST profile should show up on the container associated profile page.

REST checkbox

4.1.10. Step 10

Click on the runtime/APIs tab to verify the REST API profile.

Verify REST profile

4.1.11. Step 11

Verify that the REST API is working. Browse to customer 123, which returns the customer’s ID and name in XML format.
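The same check can be done from the command line. The application URL below is a placeholder for the subdomain you chose earlier, and the path follows the Fuse 6.1 REST quickstart, so it may differ in other versions; the command is echoed for review rather than executed:

```shell
# Placeholder app URL; substitute the subdomain you chose when
# creating the application.
APP_URL="https://restapitest-yourdomain.rhcloud.com"

# Path taken from the Fuse REST quickstart (customers under /cxf/crm);
# remove the echo to issue the real call.
echo curl -s "${APP_URL}/cxf/crm/customerservice/customers/123"
```

The response should be an XML Customer element containing the ID 123 and the customer name.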


4.2. Part 2: Configure 3scale API Management

To protect the API that you just created in Part 1 using 3scale API Management, you first need to complete the corresponding configuration, which you will later deploy in one of the three scenarios presented.

Once you have your API set up on OpenShift, you can start setting it up on 3scale to provide the management layer for access control and usage monitoring.

4.2.1. Step 1

Log in to your 3scale account. You can sign up for a 3scale account at if you don’t already have one. When you log in to your account for the first time, follow the wizard to learn the basics about integrating your API with 3scale.

4.2.2. Step 2

In API > Integration, you can enter the public URL for the Fuse application on OpenShift that you just created, e.g. "" and click on Test. This will test your setup against the 3scale API Gateway in the staging environment. The staging API gateway allows you to test your 3scale setup before deploying your proxy configuration to AWS.

3scale staging

4.2.3. Step 3

The next step is to set up the API methods that you want to monitor and rate limit. To do that, go to API > Definition and click on 'New method'.

Define your API on 3scale

For more details on creating methods, visit our API definition tutorial.

4.2.4. Step 4

Once you have all of the methods that you want to monitor and control set up under the application plan, you’ll need to map these to actual HTTP methods on endpoints of your API. Go back to the integration page and expand the "mapping rules" section.

Add mapping rule

Create mapping rules for each of the methods you created under the application plan.

Mapping rules

Once you have done that, your mapping rules will look something like this:

Mapping rules complete

For more details on mapping rules, visit our tutorial about mapping rules.

4.2.5. Step 5

Once you’ve clicked "update and test" to save and test your configuration, you are ready to download the set of configuration files that will allow you to configure your API gateway on AWS. The API gateway uses nginx, a high-performance, open-source proxy. You will find the necessary configuration files for nginx on the same integration page by scrolling down to the "production" section.

Download Lua config files

The next section will now take you through various hosting scenarios.

4.3. Part 3: Integration of your API services

There are different ways in which you can integrate your API services in 3scale. Choose the one that best fits your needs:

  1. Scenario 1 – API gateway hosted on Amazon Web Services (AWS) using the 3scale AMI
  2. Scenario 2 – API gateway hosted on APIcast (3scale’s cloud-hosted API gateway)
  3. Scenario 3 – API gateway hosted on OpenShift

4.4. Part 4: Testing the API and API Management

Testing the correct functioning of the API and the API Management is independent of the chosen scenario. You can use your favorite REST client to run the following commands.

4.4.1. Step 1

Retrieve the customer instance with id 123.
Retrieve customer

4.4.2. Step 2

Create a customer.
Create customer

4.4.3. Step 3

Update the customer instance with id 123.
Update customer

4.4.4. Step 4

Delete the customer instance with id 123.
Delete customer
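The four steps above can be sketched as curl commands. The gateway host, the user_key value, and the XML payload files are placeholders, and the path follows the Fuse REST quickstart; each command is echoed for review rather than executed:

```shell
# Placeholders: substitute your gateway host and a real 3scale user_key.
BASE="https://your-gateway.example.com/cxf/crm/customerservice/customers"
KEY="YOUR_USER_KEY"

# Step 1: retrieve the customer instance with id 123
echo curl -s "${BASE}/123?user_key=${KEY}"

# Step 2: create a customer from a hypothetical XML payload file
echo curl -s -X POST -H "Content-Type: application/xml" \
  -d @new-customer.xml "${BASE}?user_key=${KEY}"

# Step 3: update the customer instance with id 123
echo curl -s -X PUT -H "Content-Type: application/xml" \
  -d @customer-123.xml "${BASE}/123?user_key=${KEY}"

# Step 4: delete the customer instance with id 123
echo curl -s -X DELETE "${BASE}/123?user_key=${KEY}"
```

Each call carries the user_key so that the gateway can authorize it and report the hit to 3scale, which is what populates the analytics checked in Step 5.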

4.4.5. Step 5

Check the API Management analytics of your API.

If you now log back in to your 3scale account and go to Monitoring > Usage, you can see the various hits of the API endpoints represented as graphs.

API analytics

This is just one element of API Management that brings you full visibility and control over your API. Other features include:

  1. Access control
  2. Usage policies and rate limits
  3. Reporting
  4. API documentation and developer portals
  5. Monetization and billing

For more details about the specific API Management features and their benefits, please refer to the 3scale API Management Platform product description.

For more details about the specific Red Hat JBoss Fuse product features and their benefits, please refer to the JBoss Fuse Overview.

For more details about running Red Hat JBoss Fuse on OpenShift, please refer to the Getting Started with JBoss Fuse on OpenShift.

Legal Notice

Copyright © 2018 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.