End of Sale Notice:

F5 NGINX is announcing the End of Sale (EoS) for the NGINX Management Suite API Connectivity Manager Module, effective January 1, 2024.

F5 maintains generous lifecycle policies that allow customers to continue to receive support and product updates. Existing API Connectivity Manager Module customers can continue to use the product past the EoS date. License renewals are not available after September 30, 2024.

See our End of Sale announcement for more details.

Rate Limiting

Learn how to use the NGINX Management Suite API Connectivity Manager Rate Limiting policy to protect backend servers. The Rate Limiting policy lets you limit connections and the rate of requests based on request URI, client IP address, or authenticated clients.

Overview

In API Connectivity Manager, you can apply policies to an API Gateway to further enhance its configuration to meet your requirements.

Policies added at the proxy level are applied to all routes within that proxy.

For an overview of the different policy types and available policies, refer to the Learn about Policies topic.


About the Policy

The Rate Limit policy throttles the number of requests that enter an application within a given time period. You can specify multiple rate limit stipulations in a single policy, based on the Request URI, Client IP address, or Authenticated Client ID. The policy can also specify the type of traffic shaping required, such as allowing burst traffic or two-stage rate limiting.
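
For example, a single policy can combine a broad per-URI limit with a stricter per-client-IP limit. The snippet below is a minimal sketch of the limits portion of such a policy; the rates are illustrative, and the full request structure is shown later in the Applying the Policy section.

"limits": [
   {
      "rate": "100r/s",
      "rateLimitBy": "uri",
      "zoneSize": "10M"
   },
   {
      "rate": "10r/s",
      "rateLimitBy": "client.ip",
      "zoneSize": "10M"
   }
]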

Intended Audience

This guide is meant for NGINX Management Suite Administrators who can modify or create policies on an API Gateway Proxy.


Before You Begin

Complete the following prerequisites before proceeding with this guide:

  • API Connectivity Manager is installed, licensed, and running.
  • You have one or more Environments with an API Gateway.
  • You have published one or more API Gateways.

Policy Settings

  • returnCode: int, in the range 400-599. The return code used when the total number of requests has been exceeded. Required; default 429.
  • grpcStatusCode: int, in the range 400-599. The return code used when the total number of requests has been exceeded by gRPC traffic. Optional; default 429.
  • limits.rate: string, for example 10r/s. The total number of requests allowed over a given amount of time. Required; default 10r/s.
  • limits.rateLimitBy: string, one of uri, consumer, or client.ip. The value on which to apply the rate limiting. Required; default client.ip.
  • limits.zoneSize: string, for example 10M. The size of the shared memory zone for the proxy. Required; default 10M.
  • throttle.delay: int, for example 5. Defines the point within the burst size at which excessive requests are throttled (two-stage rate limiting). Optional.
  • throttle.noDelay: boolean, true or false. Determines whether burst requests are processed immediately or held in the buffer. Optional.
  • throttle.burst: int, for example 10. The total number of requests that can be handled in a burst before the rate limit is exceeded. Optional.
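
Taken together, a policy action that returns 429 when the limit is exceeded and smooths bursts with two-stage rate limiting might look like the sketch below. The placement of returnCode and throttle alongside limits under action is inferred from the field names above, so confirm it against the policy schema in your version of API Connectivity Manager.

"action": {
   "returnCode": 429,
   "limits": [
      {
         "rate": "10r/s",
         "rateLimitBy": "client.ip",
         "zoneSize": "10M"
      }
   ],
   "throttle": {
      "burst": 10,
      "delay": 5
   }
}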

Applying the Policy

You can apply this policy using either the web interface or the REST API.


Send a POST request to add the Rate Limit policy to the API Proxy.

Method: POST
Endpoint: /services/workspaces/<SERVICE_WORKSPACE_NAME>/proxies
JSON request
{
  "policies": {
     "rate-limit": [
        {
           "systemMetadata": {
              "appliedOn": "inbound",
              "context": "proxy"
           },
           "action": {
              "limits": [
                 {
                    "rate": "10r/s",
                    "rateLimitBy": "client.ip",
                    "zoneSize": "10M"
                 }
              ]
           }
        }
     ]
  }
}

This JSON example defines a Rate Limit policy that limits requests to 10 requests per second per client IP address, using a 10M shared memory zone.

To add a Rate Limit policy using the web interface:

  1. In the ACM user interface, go to Services > {your workspace}, where “your workspace” is the workspace that contains the API Proxy.
  2. Select Edit Proxy from the Actions menu for the desired API Proxy.
  3. On the Policies tab, select Add Policy from the Actions menu for Rate Limit.
  4. Add one or more Rate Limit stipulations to the policy.
  5. Configure the associated Key, Limit, Unit, Zone Size, and Zone Size Unit for each stipulation.
  6. Optionally, customize the type of rate limiting applied to the policy by choosing one of the following three options (see the sketch after this list for how these map to the policy's throttle settings):
    1. Buffer excess requests: allows bursts of requests to be stored in a buffer.
    2. Buffer excess requests no delay: allows bursts of requests to be processed immediately while there is space in the buffer.
    3. Throttle excess requests: enables two-stage rate limiting.
  7. Set custom error return codes to use when the rate limit is exceeded.
  8. Select Add to apply the Rate Limit policy to the Proxy. Then select Save & Publish to deploy the configuration to the API Proxy.
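
The three traffic-shaping options in step 6 correspond to the throttle settings listed in the Policy Settings table. The mapping below is a sketch inferred from those field descriptions rather than a definitive schema reference, and the burst and delay values are illustrative.

Buffer excess requests:
   "throttle": { "burst": 10 }

Buffer excess requests no delay:
   "throttle": { "burst": 10, "noDelay": true }

Throttle excess requests (two-stage rate limiting):
   "throttle": { "burst": 10, "delay": 5 }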

Common Use Cases

The following articles describe common use cases for rate limiting:

  1. Rate Limiting with NGINX and NGINX Plus
  2. Deploying NGINX as an API Gateway, Part 2: Protecting Backend Services