Learn about the traffic policies supported by NGINX Service Mesh and how to configure them.
This topic discusses the traffic policies supported by NGINX Service Mesh. We support the SMI spec to enable a variety of functionality within the mesh, from traffic shaping to access control, and NGINX Service Mesh provides additional custom resources that extend the SMI spec. This topic provides examples of how you can use the SMI spec and NGINX custom resources to apply policies and control your traffic.
Refer to the SMI GitHub repo to find out more about the SMI spec and how to configure it.
You can use the SMI routing spec to implement Canary, A/B testing, and other traffic routing setups.
NGINX Service Mesh is also compatible with Flagger and other SMI-compatible projects.
The Deployments using Traffic Splitting tutorial provides a walkthrough of using traffic splits in a deployment.
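As an illustration, a hypothetical TrafficSplit that gradually shifts traffic to a canary version might look like the following. The service names here are placeholders, and the SMI API version may differ depending on your mesh release:

```yaml
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: target-ts
  namespace: default
spec:
  # The root service that clients address.
  service: target-svc
  backends:
  # 90% of traffic continues to the stable version.
  - service: target-v1
    weight: 90
  # 10% of traffic is shifted to the canary.
  - service: target-v2
    weight: 10
```

Adjusting the weights over time (manually, or automatically with a tool like Flagger) completes the canary rollout.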
The NGINX Plus Ingress Controller’s custom resource TransportServer has the same Kubernetes short name (ts) as the custom resource TrafficSplit. If you install the NGINX Plus Ingress Controller, use the full names transportserver(s) and trafficsplit(s) when managing these resources with kubectl.
You can use the SMI Traffic Access spec to define access to applications throughout your cluster. Keep in mind that you must use this spec in conjunction with the SMI traffic specs, such as HTTPRouteGroup, to fully define access control in the mesh.
HTTPRouteGroup rules are picked on a first-match basis: a match is the first rule that satisfies all criteria (method, path, headers, and port) for a request. Define matches in order from most specific to least specific to ensure that the first-match policy picks the best option.
This match policy works on a per-TrafficTarget basis. If multiple TrafficTargets reference the same destination and the same sources, rule ordering is not guaranteed. Ensure that a single TrafficTarget contains all appropriate rules for a destination and source.
The Services using Access Control tutorial provides a walkthrough of using access control between services.
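To sketch how the pieces fit together, the following hypothetical resources allow a client to send GET requests to a destination's /metrics path. All names, namespaces, and ServiceAccounts are placeholders, and the SMI API versions may differ depending on your mesh release:

```yaml
apiVersion: specs.smi-spec.io/v1alpha3
kind: HTTPRouteGroup
metadata:
  name: target-route-group
  namespace: default
spec:
  matches:
  # Define matches from most specific to least specific.
  - name: metrics
    pathRegex: "/metrics"
    methods:
    - GET
---
apiVersion: access.smi-spec.io/v1alpha2
kind: TrafficTarget
metadata:
  name: traffic-target
  namespace: default
spec:
  destination:
    kind: ServiceAccount
    name: target-sa
    namespace: default
  sources:
  - kind: ServiceAccount
    name: source-sa
    namespace: default
  rules:
  - kind: HTTPRouteGroup
    name: target-route-group
    matches:
    - metrics
```

Traffic from workloads running as source-sa that matches the metrics rule is allowed; other traffic to the destination is denied.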
You can configure rate limiting by creating a RateLimit resource.
A rate limit requires a destination and an array of sources, as well as a rate limit spec that ties them together. The destination and each source take a kind, name, and namespace in order to bind to a selected resource. The rate limit spec has four custom fields:
name: The name of the rate limit.
rate: The rate to restrict traffic to. Example: "1r/s", "30r/m"
burst: The number of requests to allow beyond a given rate.
delay: The number of requests after which to delay requests.
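Putting the fields above together, a hypothetical RateLimit resource might look like this (the resource names and the specs.smi.nginx.com API version are assumptions; check the CRDs shipped with your mesh version):

```yaml
apiVersion: specs.smi.nginx.com/v1alpha1
kind: RateLimit
metadata:
  name: target-rate-limit
  namespace: default
spec:
  # The resource receiving the limited traffic.
  destination:
    kind: Service
    name: dest-svc
    namespace: default
  # Only traffic from these sources is limited.
  sources:
  - kind: Deployment
    name: source
    namespace: default
  name: 10rm
  rate: 10r/m
  burst: 3
  delay: 1
```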
Rate limits only affect the traffic from services that are in the sources list. Services not included in this list are able to pass unlimited traffic to their destination(s).
Each Pod in the destination accepts the total rate defined in a rate limit policy. If a policy has a rate of 100 r/m, and the destination consists of 3 Pods, each Pod accepts 100 r/m.
If a single rate limit policy contains multiple sources, the rate divides evenly amongst them. For example, a policy defined with

```yaml
destination:
  name: destService
sources:
- name: source1
- name: source2
rate: 100r/m
```

would result in destService accepting 50 requests per minute from source1 and 50 requests per minute from source2, for a total rate of 100 requests per minute. If two separate policies are defined for the same destination, then the rate is not divided amongst the sources.
You can download our Rate Limit example here: target-rate-limit.yaml.
Refer to the NGINX Documentation for more information on rate, burst, and delay.
You can enable circuit breaking by creating a CircuitBreaker resource.
A circuit breaker requires a destination and an associated spec. The destination takes a kind, name, and namespace in order to bind to a selected resource. Currently, only kind: Service is supported.
The circuit breaker spec has three custom fields:
errors: The number of errors before the circuit trips.
timeoutSeconds: The window for errors to occur within before tripping the circuit. Also the amount of time to wait before closing the circuit.
fallback: The name and port of a Kubernetes Service to re-route traffic to after the circuit has been tripped.
```yaml
fallback:
  name: "my-namespace/fallback-svc"
  port: 8080
```
If no namespace or port is specified, default values are used.
The destination and fallback services must be in the same namespace.
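A complete CircuitBreaker resource combining these fields might look like the following sketch. The service names and the specs.smi.nginx.com API version are assumptions; check the CRDs shipped with your mesh version:

```yaml
apiVersion: specs.smi.nginx.com/v1alpha1
kind: CircuitBreaker
metadata:
  name: circuit-breaker
  namespace: default
spec:
  # The Service being protected by the circuit breaker.
  destination:
    kind: Service
    name: dest-svc
    namespace: default
  # Trip the circuit after 3 errors within a 30-second window;
  # wait 30 seconds before closing the circuit again.
  errors: 3
  timeoutSeconds: 30
  # Re-route traffic here while the circuit is open.
  fallback:
    name: "default/fallback-svc"
    port: 8080
```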
You can download our Circuit Breaker example here: circuit-breaker.yaml
Refer to the NGINX Documentation for more information about the backup parameters, which are used for circuit breaking.