End of Sale Notice:
Commercial support for NGINX Service Mesh is available to customers who currently have active NGINX Microservices Bundle subscriptions. F5 NGINX announced the End of Sale (EoS) for the NGINX Microservices Bundles as of July 1, 2023.
See our End of Sale announcement for more details.
Configuration Options
Learn about F5 NGINX Service Mesh features and deployment options.
Overview
This document provides an overview of the various options you can configure when deploying F5 NGINX Service Mesh. We strongly recommend that you review all of the available options discussed in this document before deploying NGINX Service Mesh.
Tip:
To manage your configuration after deployment, you can use the NGINX Service Mesh API.
Refer to the API Usage Guide for more information.
Note:
For Helm users, the nginx-meshctl deploy command-line options map directly to Helm values. Alongside this guide, check out the Helm Configuration Options.
Mutual TLS
For information on the mTLS configuration options, including how to use a custom Upstream Certificate Authority, see how to Secure Mesh Traffic using mTLS.
Access Control
By default, traffic flow is allowed for all services in the mesh.
To change this to a closed global policy and only allow traffic to flow between services that have access control policies defined, use the --access-control-mode flag when deploying NGINX Service Mesh:
nginx-meshctl deploy ... --access-control-mode deny
If you need to modify the global access control mode after you’ve deployed NGINX Service Mesh, you can do so by using the API.
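With the global access control mode set to deny, traffic only flows where an SMI access control policy allows it. The sketch below shows what such a policy can look like; the ServiceAccount names (client-sa, target-sa) and the HTTPRouteGroup (target-routes, defined separately) are hypothetical placeholders, and the exact SMI API version may differ in your mesh version:

```yaml
# Hypothetical SMI TrafficTarget: allows Pods running as the client-sa
# ServiceAccount to reach Pods running as target-sa on the named routes.
apiVersion: access.smi-spec.io/v1alpha2
kind: TrafficTarget
metadata:
  name: allow-client-to-target
  namespace: default
spec:
  destination:
    kind: ServiceAccount
    name: target-sa
    namespace: default
  sources:
  - kind: ServiceAccount
    name: client-sa
    namespace: default
  rules:
  - kind: HTTPRouteGroup
    name: target-routes   # defined in a separate HTTPRouteGroup resource
    matches:
    - all-routes
```

See the Traffic Policies topic for the access control resources NGINX Service Mesh supports.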
Client Max Body Size
By default, NGINX allows a client request body to be up to 1m in size.
To change this value to a different size, use the --client-max-body-size flag when deploying NGINX Service Mesh:
nginx-meshctl deploy ... --client-max-body-size 5m
Setting the value to “0” allows for an unlimited request body size.
To configure the client max body size for a specific Pod, add the config.nsm.nginx.com/client-max-body-size: <size> annotation to the PodTemplateSpec of your Deployment, StatefulSet, and so on.
If you need to modify the global client max body size after you’ve deployed NGINX Service Mesh, you can do so by using the API.
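As a sketch, the annotation can be set on a Deployment's PodTemplateSpec like this (the Deployment name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: upload-service   # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: upload-service
  template:
    metadata:
      labels:
        app: upload-service
      annotations:
        # Allow request bodies up to 5m for this Pod only
        config.nsm.nginx.com/client-max-body-size: 5m
    spec:
      containers:
      - name: app
        image: nginx
        ports:
        - containerPort: 80
```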
Logging
By default, the NGINX sidecar emits logs at the warn level.
To set the desired log level, use the --nginx-error-log-level flag when deploying NGINX Service Mesh:
nginx-meshctl deploy ... --nginx-error-log-level debug
All of the NGINX error log levels are supported, in order from most to least verbose: debug, info, notice, warn, error, crit, alert, emerg.
By default, the NGINX sidecar emits logs using the default format. The supported formats are default and json.
To set the NGINX sidecar logging format, use the --nginx-log-format flag when deploying NGINX Service Mesh:
nginx-meshctl deploy ... --nginx-log-format json
If you need to modify the log level or log format after you’ve deployed NGINX Service Mesh, you can do so by using the API.
Load Balancing
By default, the NGINX sidecar uses the least_time load balancing method.
To set the desired load balancing method, use the --nginx-lb-method flag when deploying NGINX Service Mesh:
nginx-meshctl deploy ... --nginx-lb-method "random two least_conn"
To configure the load balancing method for a Service, add the config.nsm.nginx.com/lb-method: <method> annotation to the metadata.annotations field of your Service.
The supported methods (used for both http and stream blocks) are:

- round_robin
- least_conn
- least_time
- least_time last_byte
- least_time last_byte inflight
- random
- random two
- random two least_conn
- random two least_time
- random two least_time=last_byte
Note:
least_time and random two least_time are treated as "time to first byte" methods. stream blocks with either of these methods are given the first_byte method parameter, and http blocks are given the header parameter.
For more information on how these load balancing methods work, see HTTP Load Balancing and TCP and UDP Load Balancing.
Monitoring and Tracing
NGINX Service Mesh can connect to your Prometheus and tracing deployments. Refer to Monitoring and Tracing for more information.
Sidecar Proxy
NGINX Service Mesh works by injecting a sidecar proxy into Kubernetes resources. You can inject the sidecar proxy into the YAML or JSON definitions for your Kubernetes resources either automatically, using the automatic injection webhook, or manually, using the nginx-meshctl inject command.
Supported Labels and Annotations
NGINX Service Mesh supports the use of the labels and annotations listed in the tables below. If not specified, then the global defaults will be used.
Namespace Labels
Label | Values
---|---
injector.nsm.nginx.com/auto-inject | enabled, disabled
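For instance, auto-injection can be enabled for every Pod in a namespace by labeling the namespace itself (the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: prod   # placeholder namespace name
  labels:
    # All Pods created in this namespace get the sidecar proxy injected
    injector.nsm.nginx.com/auto-inject: enabled
```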
Pod Labels
Label | Values
---|---
injector.nsm.nginx.com/auto-inject | enabled, disabled
nsm.nginx.com/enable-ingress | true, false
nsm.nginx.com/enable-egress | true, false
Pod Annotations
Annotation | Values | Default
---|---|---
config.nsm.nginx.com/mtls-mode | off, permissive, strict | permissive
config.nsm.nginx.com/client-max-body-size | 0, 64k, 10m, … | 1m
config.nsm.nginx.com/ignore-incoming-ports | list of port strings | ""
config.nsm.nginx.com/ignore-outgoing-ports | list of port strings | ""
config.nsm.nginx.com/default-egress-allowed | true, false | false
The Pod labels and annotations should be added to the PodTemplateSpec of a Deployment, StatefulSet, and so on, before injecting the sidecar proxy.
For example, the following nginx Deployment is configured with an mtls-mode of strict:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
annotations:
config.nsm.nginx.com/mtls-mode: strict
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
- When you need to update a label or annotation, be sure to edit the Deployment, StatefulSet, and so on; if you edit a Pod, then those edits will be overwritten if the Pod restarts.
- In the case of a standalone Pod, you should edit the Pod definition, then restart the Pod to load the new config.
Service Annotations
Annotation | Values | Default
---|---|---
config.nsm.nginx.com/lb-method | round_robin, least_conn, least_time, least_time last_byte, least_time last_byte inflight, random, random two, random two least_conn, random two least_time, random two least_time=last_byte | least_time
Service annotations are added to the metadata field of the Service.
For example, the following Service is configured to use the random load balancing method:
apiVersion: v1
kind: Service
metadata:
name: my-service
annotations:
config.nsm.nginx.com/lb-method: random
spec:
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 80
Supported Protocols
NGINX Service Mesh supports HTTP and GRPC at the L7 protocol layer. Sidecars can proxy these protocols explicitly. When HTTP and GRPC protocols are configured, a wider range of traffic shaping and traffic control features are available.
NGINX Service Mesh provides TCP transport support for Services that employ other L7 protocols. Workloads are not limited to communicating via HTTP and GRPC alone. These workloads may not be able to use some of the advanced L7 functionality.
NGINX Service Mesh provides UDP transport for applications that need one-way communication of datagrams. For bidirectional communication we recommend using TCP.
Protocols will be identified by the Service's port config, .spec.ports.
Identification Rules
NGINX Service Mesh uses identification rules both on the incoming and outgoing side of application deployments to identify the kind of traffic that is being sent, as well as what traffic is intended for a particular application.
Outgoing
In a service spec, if the port config is named, the name will be used to identify the protocol. If the name contains a dash, it will be split using the dash as a delimiter and the first portion used; for example, 'http-example' will set protocol 'http'.
If the port config sets a well-known port (.spec.ports[].port), this value will be used to determine the protocol; for example, 80 will set protocol 'http'.
If none of these rules are satisfied, the protocol will default to TCP.
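Putting these rules together, a Service can declare the protocol of each port by name or rely on a well-known port number. The snippet below is illustrative; the Service name, app label, and port numbers are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http-example   # name rule: prefix before the dash -> protocol 'http'
    port: 8080
    targetPort: 8080
  - port: 80             # no protocol name, but well-known port 80 -> 'http'
    targetPort: 9090
```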
For an example of how this is used, see Deploy an Example App in the tutorials section.
Incoming
For a particular deployment or pod resource, the containerPort (.spec.containers[].ports[].containerPort) field of the Pod spec is used to determine what traffic should be allowed to access your application. This is particularly important when using strict mode for denying unwanted traffic.
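As a sketch, only the containerPort values declared below would be reachable through the mesh when mtls-mode is strict; the container name, image, and ports are placeholders:

```yaml
# Fragment of a PodTemplateSpec: traffic to ports 8080 and 9090 is allowed
# in; traffic to any undeclared port is denied in strict mode.
spec:
  containers:
  - name: app
    image: example/app:latest   # placeholder image
    ports:
    - containerPort: 8080
    - containerPort: 9090
```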
For an example of how this is used, see Deploy an Example App in the tutorials section.
Protocols
- HTTP - name 'http', port 80
- GRPC - name 'grpc'
- TCP - name 'tcp'
- UDP - name 'udp'
Unavailable protocols
- SCTP
Traffic Encryption
NGINX Service Mesh uses SPIRE, the SPIFFE Runtime Environment, to manage certificates for secure communication between proxies.
Refer to Secure Mesh Traffic using mTLS for more information about configuration options.
Traffic Metrics
NGINX Service Mesh can export metrics to Prometheus, and provides a custom dashboard for visualizing metrics in Grafana.
Refer to the Traffic Metrics topic for more information.
Traffic Policies
NGINX Service Mesh supports the SMI spec, which allows for a variety of functionality within the mesh, from traffic shaping to access control.
Refer to the SMI GitHub repo to find out more about the SMI spec and how to configure it.
Refer to the Traffic Policies topic for examples of how you can use the SMI spec in NGINX Service Mesh.
Environment
By default, NGINX Service Mesh deploys with the kubernetes configuration. If deploying in an OpenShift environment, use the --environment flag to specify an alternative environment:
nginx-meshctl deploy ... --environment "openshift"
See Considerations for when you’re deploying in an OpenShift cluster.
Headless Services
Avoid configuring traffic policies such as TrafficSplits, RateLimits, and CircuitBreakers for headless services. These policies will not work as expected because NGINX Service Mesh has no way to tie each pod IP address to its headless service.
When using NGINX Service Mesh, it is necessary to declare the port in a headless service in order for it to be matched. Without this declaration, traffic will not be routed correctly.
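For example, a headless Service must still declare its port for the mesh to match its traffic; the Service name, selector, and port below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-db   # placeholder name
spec:
  clusterIP: None     # headless: no cluster IP is allocated
  selector:
    app: db
  ports:
  - name: tcp-db      # the port must be declared so traffic is routed correctly
    port: 5432
    targetPort: 5432
```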
UDP Proxying
UDP traffic proxying is turned off by default. You can activate it at deploy time using the --enable-udp flag. Linux kernel 4.18 or greater is required.
NGINX Service Mesh automatically detects and adjusts the eth0 interface to support the 32 bytes of space required for PROXY Protocol V2.
See the UDP and eBPF architecture section for more information.
NGINX Service Mesh does not detect changes made to the MTU in the pod at runtime.
If adding a CNI changes the MTU of the eth0 interface of running pods, you should re-roll the affected pods to ensure those changes take place.