Deploy with NGINX Plus Ingress Controller

This topic describes how to install and use the NGINX Plus Ingress Controller with NGINX Service Mesh.

Overview

You can deploy NGINX Ingress Controller for Kubernetes with NGINX Service Mesh to control both ingress and egress traffic.

Important:
There are two versions of NGINX Ingress Controller for Kubernetes: NGINX Open Source and NGINX Plus. To deploy NGINX Ingress Controller with NGINX Service Mesh, you must use the NGINX Plus version. Visit the NGINX Ingress Controller product page for more information.

Supported Versions

The supported NGINX Plus Ingress Controller versions for each release are listed in the technical specifications doc.

The documentation for the latest stable release of NGINX Ingress Controller is available at docs.nginx.com/nginx-ingress-controller. For version-specific documentation, deployment configs, and configuration examples, select the tag corresponding to your desired version in GitHub.

Secure Communication Between NGINX Plus Ingress Controller and NGINX Service Mesh

The NGINX Plus Ingress Controller can participate in the mTLS cert exchange with services in the mesh without being injected with the sidecar proxy. The SPIRE server - the certificate authority of the mesh - issues certs and keys for NGINX Plus Ingress Controller and pushes them to the SPIRE agents running on each node in the cluster. NGINX Plus Ingress Controller fetches these certs and keys from the SPIRE agent over a Unix domain socket and uses them to communicate with services in the mesh.

Cert Rotation with NGINX Plus Ingress Controller

The TTL of the SVID certificates issued by SPIRE is one hour by default, but can be configured when deploying the mesh; see the documentation for nginx-meshctl. When using NGINX Plus Ingress Controller with mTLS enabled, it is best practice to keep the TTL at one hour or greater.

Install NGINX Plus Ingress Controller with mTLS enabled

To configure NGINX Plus Ingress Controller to communicate with mesh workloads over mTLS, you need to make a few modifications to the Ingress Controller’s Pod spec. This section describes each required modification and ends with a consolidated example; if you’d like to jump to installation, go to the Install with Manifests or Install with Helm sections.

  1. Mount the SPIRE agent socket

    The SPIRE agent socket needs to be mounted to the Ingress Controller Pod so the Ingress Controller can fetch its certificates and keys from the SPIRE agent. This allows the Ingress Controller to authenticate with workloads in the mesh. For more information on how SPIRE distributes certificates, see the SPIRE section in the architecture doc.

    • Kubernetes

      To mount the SPIRE agent socket in Kubernetes, add the following hostPath as a volume to the Ingress Controller’s Pod spec:

      volumes:
      - hostPath:
          path: /run/spire/sockets
          type: DirectoryOrCreate
        name: spire-agent-socket
      

      and mount the socket to the Ingress Controller’s container spec:

      volumeMounts:
      - mountPath: /run/spire/sockets
        name: spire-agent-socket
      
    • OpenShift

      To mount the SPIRE agent socket in OpenShift, add the following CSI volume to the Ingress Controller’s Pod spec:

      volumes:
      - csi:
          driver: wlapi-mounter.spire.nginx.com
          readOnly: true
        name: spire-agent-socket
      

      and mount the socket to the Ingress Controller’s container spec:

      volumeMounts:
      - mountPath: /run/spire/sockets
        name: spire-agent-socket
      

      For more information about why a CSI driver is needed to load the agent socket in OpenShift, see the Introduction in the OpenShift Considerations doc.

  2. Add command line arguments

    The following arguments must be added to the Ingress Controller’s container args:

    args:
      - -nginx-plus
      - -spire-agent-address=/run/spire/sockets/agent.sock
      ...
    
    • The -nginx-plus argument is required since this feature is only available with NGINX Plus. If you do not specify this flag, the Ingress Controller will fail to start.
    • The -spire-agent-address argument passes the address of the SPIRE agent socket, /run/spire/sockets/agent.sock, to the Ingress Controller.
  3. Add NGINX Service Mesh annotation

    The following annotation must be added to the Ingress Controller’s Pod spec:

    annotations:
      nsm.nginx.com/enable-ingress: "true"
      ...
    

    This annotation prevents NGINX Service Mesh from automatically injecting the sidecar into the Ingress Controller Pod.

  4. Add SPIFFE label

    The following label must be added to the Ingress Controller’s Pod spec:

    labels:
      spiffe.io/spiffeid: "true"
      ...
    

    This label tells SPIRE to generate a certificate for the Ingress Controller Pod(s).
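
    Putting these modifications together, a trimmed Deployment for the Ingress Controller might look like the following sketch (the names, namespace, ServiceAccount, and image tag are placeholders, not values from this guide):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-ingress                       # placeholder name
      namespace: nginx-ingress                  # placeholder namespace
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx-ingress
      template:
        metadata:
          labels:
            app: nginx-ingress
            spiffe.io/spiffeid: "true"            # SPIRE issues a certificate for this Pod
          annotations:
            nsm.nginx.com/enable-ingress: "true"  # prevents sidecar auto-injection
        spec:
          serviceAccountName: nginx-ingress       # placeholder ServiceAccount
          volumes:
          - hostPath:
              path: /run/spire/sockets
              type: DirectoryOrCreate
            name: spire-agent-socket
          containers:
          - name: nginx-plus-ingress
            image: nginx-plus-ingress:latest      # placeholder image
            args:
            - -nginx-plus
            - -spire-agent-address=/run/spire/sockets/agent.sock
            volumeMounts:
            - mountPath: /run/spire/sockets
              name: spire-agent-socket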

Note:

All communication between NGINX Plus Ingress Controller and the upstream Services occurs over mTLS, using the certificates and keys generated by the SPIRE server. Therefore, NGINX Plus Ingress Controller can only route traffic to Services in the mesh that have an mtls-mode of permissive or strict. In cases where you need to route traffic to both mTLS and non-mTLS Services, you may need another Ingress Controller that does not participate in the mTLS fabric.

Refer to the NGINX Ingress Controller’s Running Multiple Ingress Controllers guide for instructions on how to configure multiple Ingress Controllers.

If you would like to enable egress traffic, refer to the Enable Egress section of this guide.

Install with Manifests

Before installing NGINX Plus Ingress Controller, you must install NGINX Service Mesh with an mTLS mode of permissive or strict. NGINX Plus Ingress Controller will try to fetch certs from the SPIRE agent on startup. If it cannot reach the SPIRE agent, startup will fail, and NGINX Plus Ingress Controller will enter a CrashLoopBackOff state. The state will resolve once NGINX Plus Ingress Controller connects to the SPIRE agent. For instructions on how to install NGINX Service Mesh, see the Installation guide.

Note:
Before continuing, check the NGINX Plus Ingress Controller supported versions section and make sure you are working off the correct release tag for all NGINX Plus Ingress Controller instructions.
  1. Build or pull the NGINX Plus Ingress Controller image.
  2. Set up the Kubernetes resources for NGINX Plus Ingress Controller using the Kubernetes manifests.
  3. Create the NGINX Plus Ingress Controller as a Deployment or DaemonSet in Kubernetes using one of the example manifests.

Install with Helm

Before installing NGINX Plus Ingress Controller, you must install NGINX Service Mesh with an mTLS mode of permissive or strict. NGINX Plus Ingress Controller will try to fetch certs from the SPIRE agent on startup. If it cannot reach the SPIRE agent, startup will fail, and NGINX Plus Ingress Controller will enter a CrashLoopBackOff state. The state will resolve once NGINX Plus Ingress Controller connects to the SPIRE agent. For instructions on how to install NGINX Service Mesh, see the Installation guide.

Note:
NGINX Plus Ingress Controller v2.2+ is required to deploy via Helm and integrate with NGINX Service Mesh.

Follow the instructions to install the NGINX Plus version of the Ingress Controller with Helm. Set the nginxServiceMesh.enable parameter to true.

Note:
This will configure NGINX Plus Ingress Controller to route ingress traffic to NGINX Service Mesh workloads. If you would like to enable egress traffic, refer to the Enable Egress section of this guide.

The values-nsm.yaml file contains all the configuration parameters that are relevant for integration with NGINX Service Mesh. You can use this file if you are installing NGINX Plus Ingress Controller via chart sources.
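
As a sketch, the mesh-related values in values-nsm.yaml look like the following (using the nginxServiceMesh parameters referenced in this guide):

nginxServiceMesh:
  enable: true         # route ingress traffic to meshed workloads
  enableEgress: false  # set to true to enable egress; see Enable Egress below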

Expose your applications

With mTLS enabled, you can use Kubernetes Ingress, VirtualServer, and VirtualServerRoute resources to configure load balancing for HTTP and gRPC applications. TCP load balancing via TransportServer resources is not supported.
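
For example, a minimal VirtualServer resource for a hypothetical meshed HTTP application might look like this sketch (the host, names, and port are placeholders):

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp                  # placeholder name
spec:
  host: webapp.example.com      # placeholder host
  upstreams:
  - name: webapp
    service: webapp-svc         # placeholder meshed backend Service
    port: 80
  routes:
  - path: /
    action:
      pass: webapp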

Note:
The NGINX Plus Ingress Controller’s custom resource TransportServer and the SMI Spec’s custom resource TrafficSplit share the same Kubernetes short name ts. To avoid conflicts, use the full names transportserver(s) and trafficsplit(s) when managing these resources with kubectl.

To learn how to expose your applications using NGINX Plus Ingress Controller, refer to the Expose an Application with NGINX Plus Ingress Controller tutorial.

Enable Egress

You can configure NGINX Plus Ingress Controller to act as the egress endpoint of the mesh, enabling your meshed services to communicate securely with external, non-meshed services.

Note:
Multiple endpoints for a single egress deployment are supported, but multiple egress deployments are not supported.

Enable with Manifests

If you are installing NGINX Plus Ingress Controller with manifests, follow the Install with Manifests instructions and make the following changes to the NGINX Plus Ingress Controller Pod spec:

  • Add the following annotation to the NGINX Plus Ingress Controller Pod spec:

    nsm.nginx.com/enable-egress: "true"
    

    This annotation prevents automatic injection of the sidecar proxy and configures the NGINX Plus Ingress Controller as the egress endpoint of the mesh.

  • Add the following command-line argument to the container args in the NGINX Plus Ingress Controller Pod spec:

    -enable-internal-routes
    

    This will create a virtual server block in NGINX Plus Ingress Controller that terminates TLS connections using the SPIFFE certs fetched from the SPIRE agent.

    Important:
    This command-line argument must be used with the -nginx-plus and -spire-agent-address command-line arguments.

Enable with Helm

Note:
NGINX Plus Ingress Controller v2.2+ is required to deploy via Helm and integrate with NGINX Service Mesh.

If you are installing NGINX Plus Ingress Controller with Helm, follow the Install with Helm instructions and set nginxServiceMesh.enableEgress to true.

Allow Pods to route egress traffic through NGINX Plus Ingress Controller

If egress is enabled you can configure Pods to route all egress traffic - requests to non-meshed services - through NGINX Plus Ingress Controller. This feature can be enabled by adding the following annotation to the Pod spec of an application Pod:

config.nsm.nginx.com/default-egress-allowed: "true"

This annotation can be removed or changed after deployment and the egress behavior of the Pod will be updated accordingly.
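
For example, on a hypothetical application Deployment, the annotation goes in the Pod template metadata:

template:
  metadata:
    annotations:
      config.nsm.nginx.com/default-egress-allowed: "true"  # route this Pod's non-mesh traffic through the egress endpoint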

Create internal routes for non-meshed services

Internal routes represent a route from NGINX Plus Ingress Controller to a non-meshed service. This route is called “internal” because it is only accessible from a Pod in the mesh and is not accessible from the public internet.

Caution:
If you deploy NGINX Plus Ingress Controller without mTLS enabled, the internal routes could be accessible from the public internet. We do not recommend using the egress feature with a plaintext deployment of NGINX Plus Ingress Controller.

To create an internal route, create an Ingress resource using the information of your non-meshed service and add the following annotation:

nsm.nginx.com/internal-route: "true"
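
As an illustration, an Ingress resource for a hypothetical non-meshed service running inside the cluster might look like the following sketch (the names, host, port, and ingress class are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-route              # placeholder name
  annotations:
    nsm.nginx.com/internal-route: "true"
spec:
  ingressClassName: nginx           # placeholder ingress class
  rules:
  - host: legacy.example.com        # placeholder host for the non-meshed service
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: legacy-svc        # placeholder Service name
            port:
              number: 80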

If your non-meshed service is external to Kubernetes, please follow the ExternalName services example.

Note:
The nsm.nginx.com/internal-route: "true" Ingress annotation is still required for routing to external endpoints.

See the tutorial on creating internal routes for non-meshed services for a complete walkthrough.

Enable Ingress and Egress Traffic

There are two ways to enable both ingress and egress traffic using the NGINX Plus Ingress Controller. You can either allow both ingress and egress traffic through the same NGINX Plus Ingress Controller, or deploy two NGINX Plus Ingress Controllers: one handling ingress traffic only and the other handling egress traffic only.

For the single deployment option, follow the installation instructions and the instructions to Enable Egress. If you would like to configure two Ingress Controllers to keep ingress and egress traffic separate you can leverage Ingress Classes.
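
For the two-Ingress-Controller approach, here is a sketch of each instance’s container args (the class names are placeholders; -ingress-class assigns resources to an instance, and the egress instance still needs the nsm.nginx.com/enable-egress annotation described above):

# Ingress-only instance
args:
- -nginx-plus
- -spire-agent-address=/run/spire/sockets/agent.sock
- -ingress-class=nginx-ingress

# Egress-only instance
args:
- -nginx-plus
- -spire-agent-address=/run/spire/sockets/agent.sock
- -ingress-class=nginx-egress
- -enable-internal-routes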

Enable UDP Traffic

By default, NGINX Plus Ingress Controller only routes TCP traffic. You can configure it to route UDP traffic by making the following changes to the NGINX Plus Ingress Controller before deploying:

  • Enable GlobalConfiguration resources for NGINX Plus Ingress Controller by following the setup defined in the GlobalConfiguration Resource documentation.

    This allows you to define global configuration parameters for the NGINX Ingress Controller, and create a UDP listener to route ingress UDP traffic to your backend applications.

Important:
mTLS does not affect UDP communication, as mTLS in NGINX Service Mesh applies only to TCP traffic at this time.

Create a GlobalConfiguration Resource

To allow UDP traffic to be routed to your Kubernetes applications, create a UDP listener in NGINX Plus Ingress Controller. This can be done via a GlobalConfiguration Resource.

To create a GlobalConfiguration resource, see the NGINX Plus Ingress Controller documentation to create a listener with protocol UDP.
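
A minimal sketch of such a resource (the listener name and port are placeholders):

apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration       # placeholder name
  namespace: nginx-ingress        # placeholder namespace
spec:
  listeners:
  - name: udp-listener            # placeholder listener name
    port: 5353                    # placeholder port
    protocol: UDP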

Ingress UDP Traffic

You can pass and load balance UDP traffic by using a TransportServer resource. This will link the UDP listener defined in the Create a GlobalConfiguration Resource step with an upstream associated with your designated backend UDP application.

To create a TransportServer resource, follow the steps outlined in the TransportServer NGINX Plus Ingress Controller guide and link the UDP listener with the name and port of your backend service.
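
A minimal sketch, assuming the placeholder udp-listener above and a hypothetical UDP backend Service:

apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: udp-app                   # placeholder name
spec:
  listener:
    name: udp-listener            # must match the GlobalConfiguration listener
    protocol: UDP
  upstreams:
  - name: udp-app
    service: udp-backend-svc      # placeholder backend Service
    port: 5353                    # placeholder backend port
  action:
    pass: udp-app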

To learn how to expose a UDP application using NGINX Plus Ingress Controller, see the Expose a UDP Application with NGINX Plus Ingress Controller tutorial.

Plaintext configuration

Deploy NGINX Service Mesh with mtls-mode set to off and follow the instructions to deploy NGINX Plus Ingress Controller.

Add the enable-ingress and/or the enable-egress annotation shown below to the NGINX Plus Ingress Controller Pod spec:

nsm.nginx.com/enable-ingress: "true"
nsm.nginx.com/enable-egress: "true"
Caution:
All communication between NGINX Plus Ingress Controller and the services in the mesh will be over plaintext! We do not recommend using the egress feature with a plaintext deployment of NGINX Plus Ingress Controller, because internal routes could be accessible from the public internet. We highly recommend installing NGINX Plus Ingress Controller with mTLS enabled.

OpenTracing Integration

To enable traces to span from NGINX Plus Ingress Controller through the backend services in the Mesh, you’ll first need to build the NGINX Plus Ingress Controller image with the OpenTracing module. Refer to the NGINX Ingress Controller guide to using OpenTracing for more information.

NGINX Service Mesh natively supports Zipkin, Jaeger, and DataDog; refer to the Monitoring and Tracing topic for more information.

If you are using a tracing backend deployed by the Mesh, use the CLI tool to find the address of the tracing server and the sample rate.

nginx-meshctl config
{
...
  "tracing": {
    "backend": "zipkin",
    "backendAddress": "zipkin.nginx-mesh.svc.cluster.local:9411",
    "isEnabled": true,
    "sampleRate": 0.01
  },
...
}

You will need to provide these values in the opentracing-tracer-config field of the NGINX Plus Ingress Controller ConfigMap.

Below is an example of the config for Zipkin:

  opentracing-tracer-config: |
     {
       "service_name": "nginx-ingress",
       "collector_host": "zipkin.nginx-mesh.svc.cluster.local",
       "collector_port": 9411,
       "sample_rate": 0.01
     }

Add the annotation shown below to your Ingress resources. Doing so ensures that the span context propagates to the upstream requests and the operation name displays as “nginx-ingress”.

Note:
The example below uses the snippets annotation. Starting with NGINX Plus Ingress Controller version 2.1.0, snippets are disabled by default. To use snippets, set the enable-snippets command-line argument on the NGINX Plus Ingress Controller Deployment or DaemonSet.

    nginx.org/location-snippets: |
      opentracing_propagate_context;
      opentracing_operation_name "nginx-ingress";

NGINX Plus Ingress Controller Metrics

To enable metrics collection for the NGINX Plus Ingress Controller, take the following steps:

  1. Run the NGINX Plus Ingress Controller with both the -enable-prometheus-metrics and -enable-latency-metrics command line arguments. The NGINX Plus Ingress Controller exposes NGINX Plus metrics and latency metrics in Prometheus format via the /metrics path on port 9113. This port is customizable via the -prometheus-metrics-listen-port command-line argument; consult the Command Line Arguments section of the NGINX Plus Ingress Controller docs for more information on available command line arguments.

  2. Add the following Prometheus annotations to the NGINX Plus Ingress Controller Pod spec:

    prometheus.io/scrape: "true"
    prometheus.io/port: "<prometheus-metrics-listen-port>"
    
  3. Add the resource name as a label to the NGINX Plus Ingress Controller Pod spec:

    • For Deployment:

      nsm.nginx.com/deployment: <name of NGINX Plus Ingress Controller Deployment>
      
    • For DaemonSet:

      nsm.nginx.com/daemonset: <name of NGINX Plus Ingress Controller DaemonSet>
      

    This allows metrics scraped from NGINX Plus Ingress Controller Pods to be associated with the resource that created the Pods.
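
    Putting the pieces together for a Deployment named nginx-ingress (a placeholder name), the Pod template metadata would include:

    template:
      metadata:
        labels:
          nsm.nginx.com/deployment: nginx-ingress  # placeholder Deployment name
        annotations:
          prometheus.io/scrape: "true"
          prometheus.io/port: "9113"               # default -prometheus-metrics-listen-port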

View the metrics in Prometheus

The NGINX Service Mesh uses the Pod’s container name setting to identify the NGINX Plus Ingress Controller metrics that should be consumed by the Prometheus server. The Prometheus job targets all Pods that have the container name nginx-plus-ingress.

If you are using an existing Prometheus deployment, add the nginx-plus-ingress scrape config to your Prometheus configuration; consult Use an Existing Prometheus Deployment for installation instructions.

Available metrics

For a list of the NGINX Plus Ingress Controller metrics, consult the Available Metrics section of the NGINX Plus Ingress Controller docs.

Note:
The NGINX Plus metrics exported by the NGINX Plus Ingress Controller are renamed from nginx_ingress_controller_<metric-name> to nginxplus_<metric-name> to be consistent with the metrics exported by NGINX Service Mesh sidecars. For example, nginx_ingress_controller_upstream_server_response_latency_ms_count is renamed to nginxplus_upstream_server_response_latency_ms_count. The Ingress Controller specific metrics, such as nginx_ingress_controller_nginx_reloads_total, are not renamed.

For more information on metrics, a list of Prometheus labels, and examples of querying and filtering, see the Prometheus Metrics doc.

To view the metrics, use port-forwarding:

kubectl port-forward -n nginx-mesh svc/prometheus 9090

Monitor your application in Grafana

NGINX Service Mesh ships with a Grafana service and a default dashboard which you can use to monitor your application. To view the Grafana dashboard, first port-forward the service:

kubectl port-forward -n nginx-mesh svc/grafana 3000:3000

Then you can navigate your browser to localhost:3000 to view the dashboard. The default “NGINX Mesh Top” dashboard ships with the Grafana deployment in the mesh.