Configuration Options for NGINX Service Mesh

Overview

This document provides an overview of the various options you can configure when deploying NGINX Service Mesh. We strongly recommend that you review all of the available options discussed in this document before deploying NGINX Service Mesh.

Tip:

If you need to manage your config after deploying, you can use the NGINX Service Mesh REST API.

Refer to the API Usage Guide for more information.

Access Control

By default, traffic flow is allowed for all services in the mesh.

To change this to a closed global policy and only allow traffic to flow between services that have access control policies defined, use the --access-control-mode flag when deploying NGINX Service Mesh:

nginx-meshctl deploy ... --access-control-mode deny

If you need to modify the global access control mode after you’ve deployed NGINX Service Mesh, you can do so by using the REST API.
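
When the global policy is set to deny, traffic is only allowed where an SMI access control policy permits it. The following is a minimal sketch of such a policy; the names, namespaces, and service accounts are hypothetical, and the SMI API versions shown may differ from the versions supported by your release (see the Traffic Policies section below):

apiVersion: specs.smi-spec.io/v1alpha3
kind: HTTPRouteGroup
metadata:
  name: server-routes             # hypothetical route group name
  namespace: default
spec:
  matches:
  - name: all-routes
    pathRegex: ".*"
    methods:
    - GET
    - POST
---
apiVersion: access.smi-spec.io/v1alpha2
kind: TrafficTarget
metadata:
  name: client-to-server          # hypothetical policy name
  namespace: default
spec:
  destination:
    kind: ServiceAccount
    name: server                  # hypothetical service account of the receiving workload
    namespace: default
  sources:
  - kind: ServiceAccount
    name: client                  # hypothetical service account of the sending workload
    namespace: default
  rules:
  - kind: HTTPRouteGroup
    name: server-routes
    matches:
    - all-routes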

Logging

By default, the NGINX sidecar emits logs at the warn level.

To set the desired log level, use the --nginx-error-log-level flag when deploying NGINX Service Mesh:

nginx-meshctl deploy ... --nginx-error-log-level debug

All of the NGINX error log levels are supported, in the order of most to least verbose:

  • debug
  • info
  • notice
  • warn
  • error
  • crit
  • alert
  • emerg

By default, the NGINX sidecar emits logs using the default format. The supported formats are default and json.

To set the NGINX sidecar logging format, use the --nginx-log-format flag when deploying NGINX Service Mesh:

nginx-meshctl deploy ... --nginx-log-format json

If you need to modify the log level or log format after you’ve deployed NGINX Service Mesh, you can do so by using the REST API.
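
After deployment, you can verify the log level and format by reading the sidecar container's logs; the container name used here (nginx-mesh-sidecar) is an assumption and may differ in your release:

kubectl logs <pod-name> -c nginx-mesh-sidecar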

Load Balancing

By default, the NGINX sidecar uses the least_time load balancing method.

To set the desired load balancing method, use the --nginx-lb-method flag when deploying NGINX Service Mesh:

nginx-meshctl deploy ... --nginx-lb-method "random two least_conn"

To configure the load balancing method for a Service, add the config.nsm.nginx.com/lb-method: <method> annotation to the metadata.annotations field of your Service.

The supported methods (used for both http and stream blocks) are:

  • round_robin
  • least_conn
  • least_time
  • least_time last_byte
  • least_time last_byte inflight
  • random
  • random two
  • random two least_conn
  • random two least_time
  • random two least_time=last_byte

Note:
least_time and random two least_time are treated as “time to first byte” methods. stream blocks with either of these methods are given the first_byte method parameter, and http blocks are given the header parameter.
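
As a rough illustration of the note above, the corresponding NGINX Plus upstream directives look like the following. This is a simplified sketch with placeholder upstream names and servers, not the configuration the sidecar actually generates:

# http context: "least_time" maps to time to first byte via the header parameter
upstream example-http {
    least_time header;
    server 10.0.0.1:8080;    # placeholder upstream member
}

# stream context: the same method uses the first_byte parameter
upstream example-tcp {
    least_time first_byte;
    server 10.0.0.1:8080;    # placeholder upstream member
}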

For more information on how these load balancing methods work, see HTTP Load Balancing and TCP Load Balancing.

Monitoring and Tracing

NGINX Service Mesh uses Prometheus and Jaeger for monitoring and tracing, respectively. When you deploy using the default configuration, a new Prometheus server and a new Jaeger server will be created automatically.

The default addresses used for these resources are:

  • Prometheus: prometheus.nginx-mesh.svc.cluster.local:9090
  • Jaeger: jaeger.nginx-mesh.svc.cluster.local:6831

The Jaeger UI is available on port 16686.
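
For example, one way to reach the Jaeger UI from your workstation is to port-forward the Jaeger Service (assuming the default nginx-mesh namespace and Service name shown above):

kubectl port-forward -n nginx-mesh svc/jaeger 16686:16686

The UI is then available at http://localhost:16686.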

If using Zipkin instead of Jaeger, the default address is zipkin.nginx-mesh.svc.cluster.local:9411.

By default, NGINX Service Mesh deploys with tracing enabled for all Services.

Refer to Monitoring and Tracing for more information about custom monitoring and tracing options, including the use of DataDog as a tracer.

Sidecar Proxy

NGINX Service Mesh works by injecting a sidecar proxy into the YAML or JSON definitions of your Kubernetes resources.

Automatic injection is the default option. This means that any time a user creates a Kubernetes Pod resource, NGINX Service Mesh automatically injects the sidecar proxy into the Pod.

Important:
Automatic injection applies to all namespaces in your Kubernetes cluster. You can update the list of namespaces that use automatic injection by using either the NGINX Service Mesh CLI or the REST API. See the Sidecar Proxy Injection topic for more information.
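
For example, to exclude a specific workload from automatic injection, you can set the injector.nsm.nginx.com/auto-inject annotation (listed in the PodSpec Annotations table below) to "false" in its Pod template. The Deployment name and image below are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app                                      # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
      annotations:
        injector.nsm.nginx.com/auto-inject: "false"     # opt this workload out of sidecar injection
    spec:
      containers:
      - name: app
        image: nginx                                    # placeholder image
        ports:
        - containerPort: 80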

Supported Annotations

NGINX Service Mesh supports the use of the annotations listed in the tables below.

Note:

Each of the annotations listed below is described in more detail in the relevant sections of the NGINX Service Mesh documentation.

If an annotation is not specified, then the global defaults will be used.

PodSpec Annotations

Annotation Values Default
injector.nsm.nginx.com/auto-inject true, false true
config.nsm.nginx.com/mtls-mode off, permissive, strict permissive
config.nsm.nginx.com/tracing-enabled true, false true
config.nsm.nginx.com/ignore-incoming-ports list of port strings ""
config.nsm.nginx.com/ignore-outgoing-ports list of port strings ""
config.nsm.nginx.com/default-egress-allowed true, false false
nsm.nginx.com/enable-ingress true, false false
nsm.nginx.com/enable-egress true, false false

These annotations should be added to the PodSpec of a Deployment, StatefulSet, etc., before injecting the sidecar proxy. For example, the following nginx Deployment is configured with an mtls-mode of strict:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        config.nsm.nginx.com/mtls-mode: strict
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

  • When you need to update an annotation, be sure to edit the Deployment, StatefulSet, etc.; if you edit a Pod, then those edits will be overwritten if the Pod restarts.
  • In the case of a standalone Pod, you should edit the Pod definition, then restart the Pod to load the new config.

Service Annotations

Annotation: config.nsm.nginx.com/lb-method
Values: least_conn, least_time, least_time last_byte, least_time last_byte inflight, round_robin, random, random two, random two least_conn, random two least_time, random two least_time=last_byte
Default: least_time

Service annotations are added to the metadata field of the Service. For example, the following Service is configured to use the random load balancing method:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    config.nsm.nginx.com/lb-method: random
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Supported Protocols

NGINX Service Mesh supports HTTP and GRPC at the L7 protocol layer, and sidecars can proxy these protocols explicitly. When HTTP or GRPC is configured, a wider range of traffic shaping and traffic control features is available.

NGINX Service Mesh also provides TCP transport for Services that use other L7 protocols, so workloads are not limited to communicating over HTTP and GRPC alone; however, these workloads may not be able to use some of the advanced L7 functionality.

Protocols are identified from the Service's port configuration, .spec.ports.

Identification Rules

NGINX Service Mesh applies identification rules on both the incoming and outgoing sides of application deployments to determine the kind of traffic being sent, as well as which traffic is intended for a particular application.

Outgoing

In a Service spec, if the port config is named, the name is used to identify the protocol. If the name contains a dash, it is split on the dash and the first portion is used; for example, ‘http-example’ sets the protocol to ‘http’.

If the port config uses a well-known port (.spec.ports[].port), that value is used to determine the protocol; for example, port 80 sets the protocol to ‘http’.

If none of these rules are satisfied, the protocol defaults to TCP.
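
For instance, the following Service (name and ports are illustrative) is identified as HTTP because of its port name; an unnamed port set to the well-known port 80 would likewise be identified as HTTP:

apiVersion: v1
kind: Service
metadata:
  name: example-svc              # illustrative name
spec:
  selector:
    app: example
  ports:
  - name: http-example           # the portion before the dash sets the protocol: http
    port: 8080
    targetPort: 8080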

For an example of how this is used, see Deploy an Example App in the tutorials section.

Incoming

For a particular Deployment or Pod resource, the containerPort field of the Pod spec (.spec.containers[].ports[].containerPort) is used to determine what traffic should be allowed to access your application. This is particularly important when using strict mode to deny unwanted traffic.
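
For instance, in the following Pod (name, image, and port are illustrative), only traffic destined for port 8080 is identified as intended for the application container:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod              # illustrative name
  labels:
    app: example
spec:
  containers:
  - name: app
    image: nginx                 # placeholder image
    ports:
    - containerPort: 8080        # incoming traffic is identified by this declared port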

For an example of how this is used, see Deploy an Example App in the tutorials section.

Protocols

  • HTTP - name ‘http’, port 80
  • GRPC - name ‘grpc’
  • TCP - name ‘tcp’

Unavailable protocols

  • UDP
  • SCTP

Traffic Encryption

NGINX Service Mesh uses SPIRE – the SPIFFE Runtime Environment – to manage certificates for secure communication between proxies.

Refer to Secure Mesh Traffic using mTLS for more information about configuration options.

Traffic Metrics

NGINX Service Mesh uses Prometheus for metrics and Grafana for visualizations. Both are included in the installation by default.

Refer to the Traffic Metrics topic for more information.

Traffic Policies

NGINX Service Mesh supports the SMI spec, which allows for a variety of functionality within the mesh, from traffic shaping to access control.
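
For example, traffic shaping uses the SMI TrafficSplit resource. The following sketch (service names are hypothetical, and the API version may differ from the version supported by your release) sends 10% of requests for my-service to a second backend:

apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: my-service-split          # hypothetical name
spec:
  service: my-service             # root Service that clients address
  backends:
  - service: my-service-v1        # existing version
    weight: 90
  - service: my-service-v2        # new version receiving 10% of traffic
    weight: 10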

Refer to the SMI GitHub repo to find out more about the SMI spec and how to configure it.

Refer to the Traffic Policies topic for examples of how you can use the SMI spec in NGINX Service Mesh.