Introduction to NGINX Service Mesh
This document provides an overview of the options you can configure when deploying NGINX Service Mesh. We strongly recommend that you review all of the available options discussed in this document before deploying NGINX Service Mesh.
If you need to manage your config after deploying, you can use the NGINX Service Mesh REST API.
Refer to the API Usage Guide for more information.
Logging
By default, the NGINX sidecar emits logs at the warn level.
To set the desired log level, use the --nginx-error-log-level flag when deploying NGINX Service Mesh:
nginx-meshctl deploy ... --nginx-error-log-level debug
All of the NGINX error log levels are supported, in order from most to least verbose: debug, info, notice, warn, error, crit, alert, and emerg.
If you need to modify the log level after you’ve deployed NGINX Service Mesh, you can do so by using the REST API.
Load Balancing
By default, the NGINX sidecar uses the least_time load balancing method.
To set the desired load balancing method, use the --nginx-lb-method flag when deploying NGINX Service Mesh:
nginx-meshctl deploy ... --nginx-lb-method "random two least_conn"
An annotation can also be used for per-Service load balancing.
The supported methods (used for both http and stream blocks) are:
- round_robin
- least_conn
- least_time
- least_time last_byte
- least_time last_byte inflight
- random
- random two
- random two least_conn
- random two least_time
- random two least_time=last_byte
least_time and random two least_time are treated as “time to first byte” methods: stream blocks with either of these methods are given the first_byte method parameter, and http blocks are given the header parameter.
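Per-Service load balancing via an annotation might look like the following sketch. The annotation key config.nsm.nginx.com/lb-method and all resource names here are assumptions for illustration; check the annotations table for your release:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc                      # hypothetical Service name
  annotations:
    # Assumed annotation key for per-Service load balancing
    config.nsm.nginx.com/lb-method: "random two least_conn"
spec:
  selector:
    app: example
  ports:
    - name: http
      port: 80
      targetPort: 8080
```

An annotation like this would override the mesh-wide --nginx-lb-method setting for traffic addressed to this Service only.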
Monitoring and Tracing
NGINX Service Mesh uses Prometheus and Zipkin for monitoring and tracing, respectively. When you deploy using the default configuration, a new Prometheus server and a new Zipkin server will be created automatically.
The default addresses used for these resources are:
- Prometheus: prometheus.nginx-mesh.svc.cluster.local:9090
- Zipkin: zipkin.nginx-mesh.svc.cluster.local:9411
If using Jaeger instead of Zipkin, the default address is jaeger.nginx-mesh.svc.cluster.local:6831, and the Jaeger UI is available on port 16686.
By default, NGINX Service Mesh deploys with tracing enabled for all Services.
Refer to Monitoring and Tracing for more information about custom monitoring and tracing options.
Sidecar Proxy Injection
NGINX Service Mesh works by injecting a sidecar proxy into Kubernetes resources. You can choose to inject the sidecar proxy into the YAML or JSON definitions for your Kubernetes resources in the following ways:
Automatic injection is the default option. This means that any time a user creates a Kubernetes Pod resource, NGINX Service Mesh automatically injects the sidecar proxy into the Pod.
Automatic injection applies to all namespaces in your Kubernetes cluster. The list of namespaces that you want to use automatic injection for can be updated by using either the NGINX Service Mesh CLI or the REST API. See the Sidecar Proxy Injection topic for more information.
NGINX Plus Dashboard for Sidecar Proxies
You can view the NGINX Plus Dashboard for all NGINX Service Mesh sidecar proxies.
To access the dashboard for any Pod with an enabled proxy, use kubectl to port-forward to port 8886:
kubectl port-forward -n <namespace> <pod name> 8886
Then, you can view the dashboard at http://localhost:8886/dashboard.html.
Supported Annotations
NGINX Service Mesh supports the use of the annotations listed in the table below.
Each of the annotations listed below is described in more detail in the relevant sections of the NGINX Service Mesh documentation.
If an annotation is not specified, the global defaults are used.
These annotations should be added to the PodSpec of a Deployment, StatefulSet, etc., before injecting the sidecar proxy.
- When you need to update an annotation, be sure to edit the Deployment, StatefulSet, etc.; if you edit a Pod, then those edits will be overwritten if the Pod restarts.
- In the case of a standalone Pod, you should edit the Pod definition, then restart the Pod to load the new config.
| Annotation | Values | Default |
| --- | --- | --- |
| config.nsm.nginx.com/ignore-incoming-ports | list of port strings | "" |
| config.nsm.nginx.com/ignore-outgoing-ports | list of port strings | "" |
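Because the sidecar is injected from the Pod template, these annotations belong in the template’s metadata rather than on a running Pod. A minimal sketch, with hypothetical names and an assumed port-list syntax:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                      # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        # Incoming ports the sidecar should not intercept
        # (the exact multi-port list syntax is an assumption)
        config.nsm.nginx.com/ignore-incoming-ports: "8081"
    spec:
      containers:
        - name: app
          image: example/app:latest      # hypothetical image
```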
Supported Protocols
NGINX Service Mesh supports HTTP and GRPC at the L7 protocol layer. Sidecars can proxy these protocols explicitly. When HTTP and GRPC protocols are configured, a wider range of traffic shaping and traffic control features is available.
NGINX Service Mesh provides TCP transport support for Services that employ other L7 protocols. Workloads are not limited to communicating via HTTP and GRPC alone. These workloads may not be able to use some of the advanced L7 functionality.
Protocols are identified by the Service’s port config, applying the following rules in order:
- If the port config is named, the name is used to identify the protocol. If the name contains a dash, it is split on the dash and the first portion is used; for example, ‘http-example’ sets protocol ‘http’.
- If the port config sets a well-known port (.spec.ports.port), this value is used to determine the protocol; for example, 80 sets protocol ‘http’.
- If none of these rules are satisfied, the protocol defaults to TCP.
The well-known names and ports are:
- HTTP - name ‘http’, port 80
- GRPC - name ‘grpc’
- TCP - name ‘tcp’
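Putting these rules together, a Service can name its ports so the mesh detects the intended protocols. This is a sketch with illustrative names only:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc
spec:
  selector:
    app: example
  ports:
    - name: http-example    # split on the dash: protocol ‘http’
      port: 8080
    - name: grpc            # matches the well-known name: protocol ‘grpc’
      port: 50051
    - name: metrics         # no rule matches: defaults to TCP
      port: 9100
```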
mTLS
NGINX Service Mesh uses SPIRE, the SPIFFE Runtime Environment, to manage certificates for secure communication between proxies.
Refer to Secure Mesh Traffic using mTLS for more information about configuration options.
Traffic Metrics
NGINX Service Mesh uses Prometheus for metrics and Grafana for visualizations. Both are included in the installation by default.
Refer to the Traffic Metrics topic for more information.
SMI Specification
NGINX Service Mesh supports the SMI spec, which allows for a variety of functionality within the mesh, from traffic shaping to access control.
Refer to the SMI GitHub repo to find out more about the SMI spec and how to configure it.
Refer to the Traffic Policies topic for examples of how you can use the SMI spec in NGINX Service Mesh.
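As one example of what the SMI spec enables, a TrafficSplit resource shifts a weighted fraction of traffic from one backend Service to another. This is a sketch with illustrative names, and the API version (v1alpha3 here) may differ depending on your mesh release:

```yaml
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: example-split
spec:
  service: example-svc            # root Service that clients address
  backends:
    - service: example-svc-v1     # receives 90% of traffic
      weight: 90
    - service: example-svc-v2     # receives 10% of traffic (e.g. a canary)
      weight: 10
```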