Chapter 2. Production Tips

This document refers to the downloadable NGINX configuration files. As such, most of these tips will not apply to the latest version of APIcast.

If you are running our NGINX-based API gateway in a production environment, we’ve collected these best practices in response to commonly asked questions.

2.1. Load balancing with the 3scale API gateway

The 3scale API gateway is based on the high-performance NGINX server. A single instance can handle API traffic volumes high enough to meet most customers’ needs. However, we recommend running multiple instances of the gateway in parallel in production environments. This avoids a single point of failure at the API gateway layer and also provisions extra capacity to handle potential traffic spikes.

The 3scale API gateway is designed to make it easy to set up a load-balanced environment. It is completely stateless, reaching out to the 3scale backend service to perform all authorization tasks.

If you’re currently operating with a single API gateway and you’re looking to set up multiple instances in parallel, all you need to do is:

  • Deploy as many instances as you want, following the instructions here.
  • Download your NGINX configuration files from 3scale.
  • Use the same set of files for all your gateways.
  • Make sure that the server_name is the same for all of them (this will be the public domain of your API, which will also be the domain that resolves to your load balancer in front of the gateways). A sketch of the load balancer configuration follows this list.
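For reference, the sketch below shows what the load balancer in front of the gateways could look like if you implement it with NGINX. The upstream IP addresses, ports, and api.example.com are placeholders, not values from your 3scale configuration; any load balancer (hardware, ELB, HAProxy, NGINX) will work as long as it distributes traffic across the gateway instances.

 # Hypothetical NGINX load balancer in front of two identical gateway instances.
 # The IP addresses, ports, and api.example.com are placeholders.
 upstream api_gateways {
     server 10.0.0.10:80;   # gateway instance 1
     server 10.0.0.11:80;   # gateway instance 2
 }

 server {
     listen 80;
     # The public domain of your API; it must resolve to this load balancer
     # and match the server_name configured in the gateway files.
     server_name api.example.com;

     location / {
         proxy_set_header Host $host;
         proxy_pass http://api_gateways;
     }
 }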

2.2. Correctly configure DNS resolution in NGINX

This is useful to know if you are using DNS resolution for load balancing on your API backend. A typical example is AWS Elastic Load Balancing, which returns multiple different IP addresses when clients resolve its domain name.

By default, NGINX resolves the domain names of your backend servers only when it starts. It caches the resulting IP addresses and uses them when proxying incoming API requests. This is a problem in the scenario described above for two reasons:

  • It sends all the traffic to a single IP address, effectively disabling DNS load balancing.
  • The cached IP address might no longer exist, because your backend might have scaled down automatically, removing some IP addresses from the pool.

There is an easy fix for this: force NGINX to resolve the domains of the backend servers at runtime. This requires using the resolver directive to specify a DNS server:

 resolver 8.8.8.8;

That line uses one of Google’s Public DNS servers, but you can configure any other DNS server, including your own if you want to resolve private domain names.

The resolver directive has many useful options, which you can learn about in the official documentation.
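Putting this together, the sketch below shows what a proxy block with runtime DNS resolution could look like. Note that NGINX only re-resolves a name at request time when it is referenced through a variable; the backend.example.com host, the 300-second cache time, and the choice of Google’s DNS server are assumptions for illustration, not part of your downloaded configuration.

 # Sketch: resolve the API backend domain at runtime instead of only at startup.
 # backend.example.com, the DNS server, and the cache time are placeholders.
 resolver 8.8.8.8 valid=300s;   # re-resolve cached entries after 300 seconds
 resolver_timeout 5s;

 location / {
     # Referencing the host through a variable forces NGINX to resolve it
     # per request using the resolver above, picking up new IP addresses.
     set $api_backend "https://backend.example.com";
     proxy_pass $api_backend;
 }

In the configuration files downloaded from 3scale, the backend host is typically already referenced through a variable, so adding the resolver directive is usually all that is needed.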

2.3. Bypassing the authorization step in case of network failure

The 3scale Service Management API is the service that responds to the authorization requests sent by the API gateway. The availability of this service is our top priority, and it has a very good track record of uptime. Very rarely, external circumstances such as a network problem or a corporate firewall may leave your API gateway unable to reach the 3scale Service Management API. The default behavior of the API gateway when an authorization request fails is to deny the incoming API call, in order to prevent a potential security breach. However, you can customize this behavior to fit your requirements.

For example, you can deny incoming API calls from all users except those that come from a whitelist of mission-critical applications. You can implement this behavior by changing the authrep function in your Lua file to match the one in this code snippet. You should also create a whitelist.lua file with the list of app_ids whose calls should be allowed through, and place it in the same directory as the other NGINX configuration files.
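As a rough sketch (the exact contents of the referenced snippet may differ), whitelist.lua could be a small Lua module that exposes the set of allowed app_ids; the customized authrep function would consult it only when the authorization request fails because of a network error. The file layout and helper name below are assumptions for illustration.

 -- whitelist.lua (hypothetical layout): app_ids of mission-critical
 -- applications whose calls are allowed through when the 3scale
 -- Service Management API cannot be reached.
 local whitelist = {
     ["app-id-of-critical-app-1"] = true,
     ["app-id-of-critical-app-2"] = true
 }

 -- Helper the customized authrep error path can call: returns true when
 -- the given app_id belongs to a whitelisted application.
 local function is_whitelisted(app_id)
     return whitelist[app_id] == true
 end

 return { is_whitelisted = is_whitelisted }

Keep in mind that calls allowed through this way are neither authorized nor reported to 3scale at that moment, so the whitelist should stay as small as possible.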

2.3.1. Status updates

We will notify you of any problems in our service as soon as they happen. To get timely status updates, follow @3scalestatus on Twitter.