3scale API Management Batcher policy

Environment

  • Red Hat 3scale API Management Platform (3scale)
    • 2 (On Premise)
    • SaaS

Issue

  • How does the batcher policy work?

Resolution

3scale API Management Batcher policy

The primary function of this policy is to reduce traffic to the 3scale backend by using a local cache.

API Authorization and Reporting

When the APIcast gateway receives a new request, it performs the following actions:

  • API Authorization: Verifies whether the request has the right credentials and scope. If it does, the request is authorized to access the requested resource and is proxied to the corresponding upstream (API backend).

  • API Reporting: Once APIcast has received the response code from the upstream, it can report that request's usage to the 3scale API Management analytics.

APIcast needs the 3scale backend-listener component to complete both actions, since the backend-listener pod is responsible for API authorization and reporting.

When APIcast needs to authorize a request, it will make a call to one of the 3scale backend-listener authorization endpoints:

  /transactions/authorize.xml             # API Key / App_ID & App_Key Configured
  /transactions/oauth_authorize.xml       # OIDC configured  

When APIcast needs to report usage, it will make a call to one of the 3scale backend-listener reporting endpoints:

  /transactions.xml                       # report path
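
For illustration, an authorization call using API-key (user_key) authentication and a corresponding batched report could look like the following sketch. The service ID, tokens, credentials, and usage values are placeholders, and the exact parameters depend on the configured authentication mode:

  GET  /transactions/authorize.xml?service_token=<TOKEN>&service_id=<ID>&user_key=<KEY>&usage[hits]=1
  POST /transactions.xml
       service_token=<TOKEN>&service_id=<ID>&transactions[0][user_key]=<KEY>&transactions[0][usage][hits]=3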

Request flow with Batcher Policy

This policy uses its own cache for authorizations, which expire after the time set in auths_ttl, and batches the reporting of API usage so that it is sent every n seconds, as configured in batch_report_seconds.

As a result, authorization responses are cached and API usage is reported in batches instead of performing each of these actions on every request received, so the number of requests to backend-listener is notably reduced.

Trade-offs when using the Batcher Policy

The usage limits and the current utilization are stored in the Redis database of the 3scale backend, which APIcast can only query by calling the authorization endpoint on backend-listener. When the APIcast Batcher policy is enabled, APIcast does not send authorization and report requests to backend-listener until the configured cache entries expire. During this period, clients sending requests to the APIs might go over the defined limits.

Although rate-limiting accuracy may be reduced, the throughput achieved by the APIcast gateway increases because fewer authorization and report requests are sent to the backend-listener component.
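
As a purely illustrative worst case (the figures below are assumed values, not measurements), suppose an application reaches its limit just after an authorization was cached:

  auths_ttl             = 10 seconds      # assumed cached-authorization lifetime
  application traffic   = 50 requests/second through one APIcast gateway
  worst-case overshoot  ~ 50 x 10 = 500 requests beyond the limit before the cached
                          authorization expires and backend-listener is asked again

Because each APIcast instance keeps its own cache, the potential overshoot also grows with the number of gateway replicas.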

Use Cases

This policy is recommended in the following scenarios:

  1. High-load APIs where the throughput is more important than the accuracy of the rate limiting. The APIcast Batcher policy gives better results in terms of accuracy when the reporting frequency and authorization TTL are much less than the rate limiting period. For example, if the limits are per day and the reporting frequency and authorization TTL are configured to be several minutes.

  2. API products that receive much more traffic from a particular application in comparison with the rest. The efficacy of this policy will depend on the cache hit ratio. For use cases where the combination of services, applications and mapping rules is relatively low, caching and batching will be very effective and will increase the throughput of the system significantly.

  3. When API traffic volume exceeds the largest SKU for which 3scale was tested:

    • up to 100 million requests per day.
    • up to 1.1k requests per second (sustained).

Important: 3scale inbound traffic above the levels described in item 3 (above) puts the installation into an untested, and therefore unsupported, configuration. In that case, using the APIcast Batcher policy is mandatory, as it helps to bring the installation back within the tested and supported traffic volume.

Configuration

The APIcast Batcher policy allows two parameters to be configured:

  • auths_ttl: TTL for cached auths in seconds.
  • batch_report_seconds: Duration (in seconds) for batching reports.
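
For reference, a minimal sketch of how the policy might be added to an APIcast policy chain is shown below; the values are illustrative only and must be tuned for each environment:

  {
    "name": "3scale_batcher",
    "version": "builtin",
    "configuration": {
      "auths_ttl": 30,
      "batch_report_seconds": 60
    }
  }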

Also, from Red Hat 3scale API Management 2.15, the maximum cache size of the Batcher policy (which defaults to 20m) can be increased if needed by setting the environment variable APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE.
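
For example, the variable could be set on the APIcast deployment as follows (the 40m value is illustrative only):

  APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE=40m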

The auths_ttl setting reduces the number of authorization requests made to backend-listener by each APIcast gateway, and batch_report_seconds reduces the number of reports sent to the 3scale backend.

It is recommended to set batch_report_seconds to a value higher than auths_ttl, and, when JWTs are used, to set auths_ttl to a value lower than the exp claim of the JWT tokens used for authorization.

Note: there are no recommended values for the exp claim of the JWT tokens (when used).

Further details can be checked in the official 3scale documentation.

Note: it is recommended to test the policy in an environment that replicates the production data and traffic profiles as closely as possible, so that the results can be relied upon. Testing is the responsibility of the user, and Red Hat does not recommend any default values for this policy because the configuration is entirely dependent on the environment and traffic profiles where it is implemented.
