Introduction to NGINX Service Mesh

Overview

This document provides an overview of the various options you can configure when deploying NGINX Service Mesh. We strongly recommend that you review all of the available options discussed in this document before deploying NGINX Service Mesh.

Tip:

If you need to manage your config after deploying, you can use the NGINX Service Mesh REST API.

Refer to the API Usage Guide for more information.

Logging

By default, the NGINX sidecar emits logs at the warn level.

To set the desired log level, use the --nginx-error-log-level flag when deploying NGINX Service Mesh:

nginx-meshctl deploy ... --nginx-error-log-level debug

All of the NGINX error log levels are supported, in the order of most to least verbose:

  • debug
  • info
  • notice
  • warn
  • error
  • crit
  • alert
  • emerg

If you need to modify the log level after you’ve deployed NGINX Service Mesh, you can do so by using the REST API.

Monitoring and Tracing

NGINX Service Mesh uses Prometheus and Zipkin for monitoring and tracing, respectively. When you deploy using the default configuration, a new Prometheus server and a new Zipkin server will be created automatically.

By default, NGINX Service Mesh deploys with tracing enabled for all Services.

Refer to Monitoring and Tracing for more information about custom monitoring and tracing options.

Sidecar Proxy

NGINX Service Mesh works by injecting a sidecar proxy into Kubernetes resources. You can choose how the sidecar proxy is injected into the YAML or JSON definitions for your Kubernetes resources.

Automatic injection is the default option. This means that any time a user creates a Kubernetes Pod resource, NGINX Service Mesh automatically injects the sidecar proxy into the Pod.

Important:
Automatic injection applies to all namespaces in your Kubernetes cluster. You can update the list of namespaces that use automatic injection by using either the NGINX Service Mesh CLI or the REST API. See the Sidecar Proxy Injection topic for more information.
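
For example, you can opt a single Pod out of automatic injection by setting the injector.nsm.nginx.com/auto-inject annotation (listed in the annotations table below) to "false". The following is a minimal sketch; the Pod name and image are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod                                # placeholder name
  annotations:
    injector.nsm.nginx.com/auto-inject: "false"    # skip sidecar injection for this Pod
spec:
  containers:
  - name: app
    image: nginx:stable                            # placeholder image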

NGINX Plus Dashboard for Sidecar Proxies

You can view the NGINX Plus Dashboard for all NGINX Service Mesh sidecar proxies.

To access the dashboard for any Pod with an enabled proxy, use kubectl to port-forward to port 8886.

kubectl port-forward -n <namespace> <pod name> 8886

Then, you can view the dashboard at http://localhost:8886/dashboard.html.

Supported Annotations

NGINX Service Mesh supports the use of the annotations listed in the table below.

Annotations should be added to the PodSpec of a Deployment, StatefulSet, etc., before injecting the sidecar proxy.

  • When you need to update an annotation, be sure to edit the Deployment, StatefulSet, etc.; if you edit a Pod, then those edits will be overwritten if the Pod restarts.
  • In the case of a standalone Pod, you should edit the Pod definition, then restart the Pod to load the new config.
Note:

Each of the annotations listed below is described in more detail in the relevant sections of the NGINX Service Mesh documentation.

If an annotation is not specified, then the global defaults will be used.

Annotation                                     Values                     Default
injector.nsm.nginx.com/auto-inject             true, false                true
config.nsm.nginx.com/mtls-mode                 off, permissive, strict    permissive
config.nsm.nginx.com/tracing-enabled           true, false                true
config.nsm.nginx.com/ignore-incoming-ports     list of port strings       ""
config.nsm.nginx.com/ignore-outgoing-ports     list of port strings       ""
config.nsm.nginx.com/default-egress-allowed    true, false                false
nsm.nginx.com/enable-ingress                   true, false                false
nsm.nginx.com/enable-egress                    true, false                false
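
For example, the following sketch shows annotations set on the Pod template of a Deployment so that they persist across Pod restarts. The Deployment name, labels, and image are placeholders, and the annotation values are taken from the table above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                                     # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        config.nsm.nginx.com/mtls-mode: "strict"        # require mTLS for this workload
        config.nsm.nginx.com/tracing-enabled: "false"   # opt this workload out of tracing
    spec:
      containers:
      - name: example-app
        image: example-app:latest                       # placeholder image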

Supported Protocols

NGINX Service Mesh supports HTTP and GRPC at the L7 protocol layer. Sidecars can proxy these protocols explicitly. When HTTP and GRPC protocols are configured, a wider range of traffic shaping and traffic control features are available.

NGINX Service Mesh provides TCP transport support for Services that use other L7 protocols, so workloads are not limited to communicating via HTTP and GRPC alone. These workloads, however, may not be able to use some of the advanced L7 functionality.

Protocols are identified by the Service’s port config, .spec.ports.

Identification Rules

If the port config is named, the name is used to identify the protocol. If the name contains a dash, the name is split on the dash and the first portion is used; for example, ‘http-example’ sets the protocol to ‘http’.

If the port config sets a well-known port (.spec.ports[].port), that value is used to determine the protocol; for example, port 80 sets the protocol to ‘http’.

If none of these rules are satisfied, the protocol defaults to TCP.

Protocols

  • HTTP - name ‘http’, port 80
  • GRPC - name ‘grpc’
  • TCP - name ‘tcp’
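
The following Service definition is a minimal sketch that puts the identification rules above together; the Service name, selector, and target ports are placeholders.

apiVersion: v1
kind: Service
metadata:
  name: example-svc          # placeholder name
spec:
  selector:
    app: example-app         # placeholder selector
  ports:
  - name: http-example       # identified as HTTP from the name ('http' before the dash)
    port: 80                 # port 80 alone would also identify HTTP
    targetPort: 8080
  - name: grpc               # identified as GRPC from the name
    port: 50051
    targetPort: 50051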

Unavailable protocols

  • UDP
  • SCTP

Traffic Encryption

NGINX Service Mesh uses SPIRE – the SPIFFE Runtime Environment – to manage certificates for secure communication between proxies.

Refer to Secure Mesh Traffic using mTLS for more information about configuration options.

Traffic Metrics

NGINX Service Mesh uses Prometheus for metrics and Grafana for visualizations. Both are included in the installation by default.

Refer to the Traffic Metrics topic for more information.

Traffic Policies

NGINX Service Mesh supports the SMI spec, which allows for a variety of functionality within the mesh, from traffic shaping to access control.
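
As an illustration of what the SMI spec enables, the following is a hedged sketch of a TrafficSplit that shifts a small share of traffic to a new version of a Service. The API version and Service names are assumptions; check the Traffic Policies topic for the resources and versions that NGINX Service Mesh supports.

apiVersion: split.smi-spec.io/v1alpha3   # assumed SMI API version
kind: TrafficSplit
metadata:
  name: example-split                    # placeholder name
spec:
  service: example-svc                   # root Service that clients address
  backends:
  - service: example-svc-v1              # placeholder backend Services
    weight: 90
  - service: example-svc-v2
    weight: 10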

Refer to the SMI GitHub repo to find out more about the SMI spec and how to configure it.

Refer to the Traffic Policies topic for examples of how you can use the SMI spec in NGINX Service Mesh.