Learn about NGINX Service Mesh features and deployment options.
This document provides an overview of the various options you can configure when deploying NGINX Service Mesh. We strongly recommend that you review all of the available options discussed in this document before deploying NGINX Service Mesh.
To manage your configuration after deployment, you can use the NGINX Service Mesh API.
Refer to the API Usage Guide for more information.
For Helm users, the `nginx-meshctl deploy` command-line options map directly to Helm values. Alongside this guide, check out the Helm Configuration Options.
For information on the mTLS configuration options, including how to use a custom Upstream Certificate Authority, see how to Secure Mesh Traffic using mTLS.
By default, traffic flow is allowed for all services in the mesh.
To change this to a closed global policy, which only allows traffic to flow between services that have access control policies defined, use the `--access-control-mode` flag when deploying NGINX Service Mesh:
```
nginx-meshctl deploy ... --access-control-mode deny
```
If you need to modify the global access control mode after you’ve deployed NGINX Service Mesh, you can do so by using the API.
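With a closed (deny) global policy, traffic only flows where an SMI access control policy permits it. As a hedged sketch (the names `frontend-sa`, `backend-sa`, and `backend-routes` are hypothetical, and the SMI API versions may differ by mesh release), a TrafficTarget allowing one workload to reach another might look like:

```yaml
apiVersion: specs.smi-spec.io/v1alpha3
kind: HTTPRouteGroup
metadata:
  name: backend-routes
spec:
  matches:
  - name: all-routes
    pathRegex: ".*"        # match every path
    methods: ["*"]         # match every HTTP method
---
apiVersion: access.smi-spec.io/v1alpha2
kind: TrafficTarget
metadata:
  name: allow-frontend-to-backend
spec:
  destination:
    kind: ServiceAccount
    name: backend-sa       # hypothetical service account of the destination Pods
  sources:
  - kind: ServiceAccount
    name: frontend-sa      # hypothetical service account of the allowed client
  rules:
  - kind: HTTPRouteGroup
    name: backend-routes
    matches:
    - all-routes
```

See the Traffic Policies topic for the access control resources the mesh actually supports.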
Client Max Body Size
By default, NGINX allows a client request body to be up to 1m in size.
To change this value to a different size, use the
--client-max-body-size flag when deploying NGINX Service Mesh:
```
nginx-meshctl deploy ... --client-max-body-size 5m
```
Setting the value to “0” allows for an unlimited request body size.
To configure the client max body size for a specific Pod, add the
config.nsm.nginx.com/client-max-body-size: <size> annotation to the PodTemplateSpec of your Deployment, StatefulSet, and so on.
If you need to modify the global client max body size after you’ve deployed NGINX Service Mesh, you can do so by using the API.
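As an illustration of the per-Pod annotation described above (the Deployment name, image, and size value here are arbitrary), the annotation is placed in the PodTemplateSpec:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: upload-api                 # hypothetical Deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: upload-api
  template:
    metadata:
      labels:
        app: upload-api
      annotations:
        config.nsm.nginx.com/client-max-body-size: 5m   # per-Pod override of the global setting
    spec:
      containers:
      - name: app
        image: nginx               # placeholder image
        ports:
        - containerPort: 80
```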
Refer to the NGINX core module documentation for the `client_max_body_size` directive for more information.
By default, the NGINX sidecar emits logs at the `warn` level.
To set the desired log level, use the
--nginx-error-log-level flag when deploying NGINX Service Mesh:
```
nginx-meshctl deploy ... --nginx-error-log-level debug
```
All of the NGINX error log levels are supported, in order from most to least verbose: `debug`, `info`, `notice`, `warn`, `error`, `crit`, `alert`, and `emerg`.
By default, the NGINX sidecar emits logs using the `default` format. The supported formats are `default` and `json`.
To set the NGINX sidecar logging format, use the
--nginx-log-format flag when deploying NGINX Service Mesh:
```
nginx-meshctl deploy ... --nginx-log-format json
```
If you need to modify the log level or log format after you’ve deployed NGINX Service Mesh, you can do so by using the API.
By default, the NGINX sidecar uses the `least_time` load balancing method.
To set the desired load balancing method, use the `--nginx-lb-method` flag when deploying NGINX Service Mesh:
```
nginx-meshctl deploy ... --nginx-lb-method "random two least_conn"
```
To configure the load balancing method for a Service, add the
config.nsm.nginx.com/lb-method: <method> annotation to the
metadata.annotations field of your Service.
The supported methods (used for both `http` and `stream` blocks) are:

- `round_robin`
- `least_conn`
- `least_time`
- `least_time last_byte`
- `least_time last_byte inflight`
- `random`
- `random two`
- `random two least_conn`
- `random two least_time`
- `random two least_time=last_byte`

`least_time` and `random two least_time` are treated as "time to first byte" methods: `stream` blocks with either of these methods are given the `first_byte` method parameter, and `http` blocks are given the `header` parameter.
For more information on how these load balancing methods work, see HTTP Load Balancing and TCP and UDP Load Balancing.
Monitoring and Tracing
NGINX Service Mesh can connect to your Prometheus and tracing deployments. Refer to Monitoring and Tracing for more information.
NGINX Service Mesh works by injecting a sidecar proxy into Kubernetes resources. You can choose to inject the sidecar proxy into the YAML or JSON definitions for your Kubernetes resources in the following ways:

- Automatic injection: Pods are injected automatically when they are created in namespaces where auto-injection is enabled.
- Manual injection: run `nginx-meshctl inject` against your resource definitions and apply the output.
Supported Labels and Annotations
NGINX Service Mesh supports the use of the labels and annotations listed in the tables below. If not specified, then the global defaults will be used.
| Annotation | Values | Default |
|---|---|---|
| `config.nsm.nginx.com/ignore-incoming-ports` | list of port strings | `""` |
| `config.nsm.nginx.com/ignore-outgoing-ports` | list of port strings | `""` |
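For instance (the port numbers here are chosen purely for illustration), a Pod that should bypass the sidecar for incoming traffic on port 8443 and for outgoing connections to port 5432 could carry these annotations in its PodTemplateSpec:

```yaml
template:
  metadata:
    annotations:
      config.nsm.nginx.com/ignore-incoming-ports: "8443"  # traffic arriving on this port skips the proxy
      config.nsm.nginx.com/ignore-outgoing-ports: "5432"  # outgoing connections to this port skip the proxy
```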
The Pod labels and annotations should be added to the PodTemplateSpec of a Deployment, StatefulSet, and so on, before injecting the sidecar proxy.
For example, the following `nginx` Deployment is configured with an `mtls-mode` of `strict`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        config.nsm.nginx.com/mtls-mode: strict
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
- When you need to update a label or annotation, be sure to edit the Deployment, StatefulSet, and so on; if you edit a Pod, then those edits will be overwritten if the Pod restarts.
- In the case of a standalone Pod, you should edit the Pod definition, then restart the Pod to load the new config.
Service annotations are added to the metadata field of the Service.
For example, the following Service is configured to use the
random load balancing method:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    config.nsm.nginx.com/lb-method: random
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```
NGINX Service Mesh supports HTTP and GRPC at the L7 protocol layer. Sidecars can proxy these protocols explicitly. When HTTP and GRPC protocols are configured, a wider range of traffic shaping and traffic control features are available.
NGINX Service Mesh provides TCP transport support for Services that employ other L7 protocols. Workloads are not limited to communicating via HTTP and GRPC alone. These workloads may not be able to use some of the advanced L7 functionality.
NGINX Service Mesh provides UDP transport for applications that need one-way communication of datagrams. For bidirectional communication, we recommend using TCP.
Protocols are identified by the Service's port configuration.
NGINX Service Mesh uses identification rules both on the incoming and outgoing side of application deployments to identify the kind of traffic that is being sent, as well as what traffic is intended for a particular application.
In a Service spec, if the port config is named, the name is used to identify the protocol. If the name contains a dash, it is split using the dash as a delimiter and the first portion is used; for example, `http-example` sets protocol `http`.
If the port config sets a well-known port (`.spec.ports.port`), this value is used to determine the protocol; for example, 80 sets protocol `http`.
If none of these rules are satisfied the protocol will default to TCP.
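To illustrate the naming rule above (the Service name and port numbers are arbitrary), this Service is detected as `http` because of its port name, even though it listens on a non-well-known port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc        # hypothetical Service name
spec:
  selector:
    app: example
  ports:
  - name: http-example     # split on the dash; "http" sets the protocol
    port: 8080             # not a well-known port, so the name decides
    targetPort: 8080
```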
For an example of how this is used, see Deploy an Example App in the tutorials section.
For a particular Deployment or Pod resource, the container port (`.spec.containers.ports.containerPort`) field of the Pod spec is used to determine what traffic should be allowed to access your application. This is particularly important when using strict mode to deny unwanted traffic.
For an example of how this is used, see Deploy an Example App in the tutorials section.
The default protocol mappings are:

- HTTP: name `http`, port 80
- GRPC: name `grpc`
- TCP: name `tcp`
- UDP: name `udp`
NGINX Service Mesh uses SPIRE – the SPIFFE Runtime Environment – to manage certificates for secure communication between proxies.
Refer to Secure Mesh Traffic using mTLS for more information about configuration options.
NGINX Service Mesh can export metrics to Prometheus, and provides a custom dashboard for visualizing metrics in Grafana.
Refer to the Traffic Metrics topic for more information.
NGINX Service Mesh supports the SMI spec, which allows for a variety of functionality within the mesh, from traffic shaping to access control.
Refer to the SMI GitHub repo to find out more about the SMI spec and how to configure it.
Refer to the Traffic Policies topic for examples of how you can use the SMI spec in NGINX Service Mesh.
By default, NGINX Service Mesh deploys with the `kubernetes` environment configuration. If deploying in an OpenShift environment, use the `--environment` flag to specify the alternative environment:
```
nginx-meshctl deploy ... --environment "openshift"
```
See Considerations for details on deploying in an OpenShift cluster.
Avoid configuring traffic policies such as TrafficSplits, RateLimits, and CircuitBreakers for headless services. These policies will not work as expected because NGINX Service Mesh has no way to tie each pod IP address to its headless service.
When using NGINX Service Mesh, it is necessary to declare the port in a headless service in order for it to be matched. Without this declaration, traffic will not be routed correctly.
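A minimal sketch of a headless Service with its port declared (the names here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None          # headless: no cluster IP is allocated
  selector:
    app: my-app
  ports:
  - name: http             # the port must be declared for mesh traffic to be matched
    port: 80
    targetPort: 80
```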
UDP traffic proxying is turned off by default. You can activate it at deploy time using the
--enable-udp flag. Linux kernel 4.18 or greater is required.
NGINX Service Mesh automatically detects and adjusts the
eth0 interface to support the 32 bytes of space required for PROXY Protocol V2.
See the UDP and eBPF architecture section for more information.
NGINX Service Mesh does not detect changes made to the MTU in the pod at runtime.
If adding a CNI changes the MTU of the
eth0 interface of running pods, you should re-roll the affected pods to ensure those changes take place.