Deploy with NGINX Plus Ingress Controller for Kubernetes

This topic describes how to install and use the NGINX Plus Ingress Controller with NGINX Service Mesh.

Overview

You can deploy NGINX Ingress Controller for Kubernetes with NGINX Service Mesh to control both ingress and egress traffic.

Important:
There are two versions of NGINX Ingress Controller for Kubernetes: NGINX Open Source and NGINX Plus. To deploy NGINX Ingress Controller with NGINX Service Mesh, you must use the NGINX Plus version. Visit the NGINX Ingress Controller product page for more information.

Supported Versions

The following versions are supported:

The documentation for the latest stable release of NGINX Ingress Controller is available at docs.nginx.com/nginx-ingress-controller. For version-specific documentation, deployment configs, and configuration examples, select the tag corresponding to your desired version in GitHub.

Secure Communication Between NGINX Plus Ingress Controller and NGINX Service Mesh

The NGINX Plus Ingress Controller can participate in the mTLS certificate exchange with services in the mesh without being injected with the sidecar proxy. The Spire server - the certificate authority of the mesh - issues certificates and keys for NGINX Plus Ingress Controller and pushes them to the Spire agents running on each node in the cluster. NGINX Plus Ingress Controller fetches these certificates and keys from the Spire agent over a Unix socket and uses them to communicate with services in the mesh.

The NGINX Plus Ingress Controller Kubernetes Deployment and DaemonSet manifests include the configuration changes shown below, which are required to use this feature. You can download the manifests here:

  • Deployment: nginx-ingress-controller/nginx-plus-ingress.yaml

  • DaemonSet: nginx-ingress-controller/nginx-plus-ingress-daemonset.yaml

  • The Spire agent socket is added as a volume to the NGINX Plus Ingress Controller Pod spec:

    volumes:
    - hostPath:
        path: /run/spire/sockets
        type: DirectoryOrCreate
      name: spire-agent-socket
    
  • The socket is mounted to the NGINX Plus Ingress Controller container:

    volumeMounts:
    - mountPath: /run/spire/sockets
      name: spire-agent-socket
    
  • The address of the Spire agent, /run/spire/sockets/agent.sock, is provided to the NGINX Plus Ingress Controller using the -spire-agent-address command-line argument.

    Note:
    This feature is only available with NGINX Plus. You must use the -nginx-plus command-line argument; otherwise, the NGINX Plus Ingress Controller will fail to start.

    args:
      - -nginx-plus
      - -spire-agent-address=/run/spire/sockets/agent.sock
      ...
    
  • The annotation nsm.nginx.com/enable-ingress is set to "true" in the NGINX Plus Ingress Controller Pod spec to prevent automatic injection of the sidecar proxy.

    annotations:
      nsm.nginx.com/enable-ingress: "true"
      ...
    

    If you would like to enable egress traffic, refer to the enabling egress section of this guide.
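Taken together, the changes above can be sketched as a partial Deployment Pod template. This is an illustrative sketch, not a complete manifest; the container name and image tag are placeholders you should replace with your own:

```yaml
# Sketch of the relevant parts of the Pod template (illustrative names).
template:
  metadata:
    annotations:
      nsm.nginx.com/enable-ingress: "true"      # skip automatic sidecar injection
  spec:
    containers:
    - name: nginx-plus-ingress
      image: nginx-plus-ingress:version         # replace with your built image
      args:
      - -nginx-plus
      - -spire-agent-address=/run/spire/sockets/agent.sock
      volumeMounts:
      - mountPath: /run/spire/sockets           # mount the Spire agent socket
        name: spire-agent-socket
    volumes:
    - hostPath:
        path: /run/spire/sockets
        type: DirectoryOrCreate
      name: spire-agent-socket
```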

Cert Rotation with NGINX Plus Ingress Controller

The TTL of the SVID certificates issued by Spire is set to one hour by default, but it can be configured when deploying the mesh; see the documentation for nginx-meshctl. Note that when using NGINX Plus Ingress Controller with mTLS enabled, it is best practice to keep the TTL at one hour or greater.
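As a sketch, the TTL could be set at deploy time. The flag name below is an assumption; confirm it against the nginx-meshctl reference for your mesh version before using it:

```shell
# Illustrative only; verify the flag name with "nginx-meshctl deploy --help".
nginx-meshctl deploy --mtls-svid-ttl 1h
```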

Install NGINX Plus Ingress Controller with mTLS enabled

Note:

All communication between NGINX Plus Ingress Controller and the upstream Services occurs over mTLS, using the certificates and keys generated by the Spire server. Therefore, NGINX Plus Ingress Controller can only route traffic to Services in the mesh that have an mtls-mode of permissive or strict. In cases where you need to route traffic to both mTLS and non-mTLS Services, you may need another Ingress Controller that does not participate in the mTLS fabric.

Refer to the NGINX Ingress Controller’s Running Multiple Ingress Controllers guide for instructions on how to configure multiple Ingress Controllers.

To configure the NGINX Plus Ingress Controller to use mTLS, take the steps below.

Important:
Before continuing, check the NGINX Plus Ingress Controller supported versions section and make sure you are working off the correct release tag for all NGINX Plus Ingress Controller instructions.
  1. Follow the installation instructions to install NGINX Service Mesh on your Kubernetes cluster. You can either deploy NGINX Service Mesh with the default value for mTLS mode, which is permissive, or set it to strict.

    Important:
    Before deploying NGINX Plus Ingress Controller, verify that all NGINX Service Mesh Pods – especially the Spire agent – are up and running. NGINX Ingress Controller tries to fetch certificates from the Spire agent on startup. If it cannot reach the Spire agent, startup fails and NGINX Plus Ingress Controller goes into a CrashLoopBackOff state. The state resolves once NGINX Plus Ingress Controller connects to the Spire agent.
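    For example, to deploy the mesh with strict mTLS and then check that the mesh Pods (including the Spire agent) are ready, you might run the following. The namespace assumes the default nginx-mesh:

    ```shell
    # Deploy with strict mTLS, then verify all mesh Pods are Running.
    nginx-meshctl deploy --mtls-mode strict
    kubectl get pods -n nginx-mesh
    ```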

  2. Build or pull the NGINX Plus Ingress Controller image:

  3. Set up Kubernetes Resources for NGINX Plus Ingress Controller using Kubernetes manifests:

    Note:
    Installation with Helm is not supported by NGINX Service Mesh.

  4. Create the NGINX Plus Ingress Controller as a Deployment or DaemonSet in Kubernetes using the example files.

    Note:
    The provided manifests configure NGINX Plus Ingress Controller for ingress traffic only. If you would like to enable egress traffic, refer to the enabling egress section of this guide.
    Important:
    Be sure to replace the nginx-plus-ingress:version image used in the example file with the image you built or pulled in Step 2.
    • Apply the file:

      • For Deployment:

        kubectl apply -f nginx-plus-ingress.yaml
        
      • For DaemonSet:

        kubectl apply -f nginx-plus-ingress-daemonset.yaml
        
  5. Run the following command to ensure that the Ingress Controller is running:

    kubectl get pods --namespace=nginx-ingress
    
  6. Refer to the NGINX Ingress Controller docs to Get Access to the Ingress Controller.

With mTLS enabled, you can use Kubernetes Ingress, VirtualServer, and VirtualServerRoute resources to configure load balancing for HTTP and gRPC applications. TCP load balancing via TransportServer resources is not supported.

Note:
The NGINX Plus Ingress Controller’s custom resource TransportServer and the SMI Spec’s custom resource TrafficSplit share the same Kubernetes short name ts. To avoid conflicts, use the full names transportserver(s) and trafficsplit(s) when managing these resources with kubectl.
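For example, using the full resource names avoids the short-name collision:

```shell
# "kubectl get ts" is ambiguous; use the full names instead.
kubectl get transportservers -n nginx-ingress
kubectl get trafficsplits --all-namespaces
```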

To learn how to expose your applications using NGINX Plus Ingress Controller, refer to the Expose an Application with NGINX Plus Ingress Controller tutorial.

Enabling Egress

You can configure NGINX Plus Ingress Controller to act as the egress endpoint of the mesh, enabling your meshed services to communicate securely with external, non-meshed services.

To enable egress, make the following changes to the NGINX Plus Ingress Controller Pod spec before deploying:

  • Add the annotation nsm.nginx.com/enable-egress: "true" to the NGINX Plus Ingress Controller Pod spec to enable egress traffic.

    This annotation prevents automatic injection of the sidecar proxy and configures the NGINX Plus Ingress Controller Pod as the egress endpoint of the mesh.

    Note:
    Only one egress endpoint is supported.

  • Add the command-line argument -enable-internal-routes to the container args in the NGINX Plus Ingress Controller Pod spec.

    This creates a virtual server block in NGINX Plus Ingress Controller that terminates TLS connections using the SPIFFE certificates fetched from the Spire agent.

    Important:
    This command-line argument must be used with the -nginx-plus and -spire-agent-address command-line arguments.
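Combined with the mTLS settings described earlier, the egress-related parts of the Pod spec can be sketched as follows (container name is an illustrative placeholder):

```yaml
# Sketch: egress-relevant fields of the Pod spec.
metadata:
  annotations:
    nsm.nginx.com/enable-egress: "true"   # marks this Pod as the mesh egress endpoint
spec:
  containers:
  - name: nginx-plus-ingress
    args:
    - -nginx-plus
    - -spire-agent-address=/run/spire/sockets/agent.sock
    - -enable-internal-routes             # terminate TLS with SPIFFE certs
```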

Allow Pods to route egress traffic through NGINX Plus Ingress Controller

If egress is enabled, you can configure Pods to route all egress traffic - requests to non-meshed services - through NGINX Plus Ingress Controller. Enable this feature by adding the following annotation to the Pod spec of an application Pod:

config.nsm.nginx.com/default-egress-allowed: "true"

This annotation can be removed or changed after deployment and the egress behavior of the Pod will be updated accordingly.
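For example, a hypothetical application Deployment that routes its non-mesh traffic through the egress endpoint might carry the annotation in its Pod template (all names and the image are illustrative):

```yaml
# Hypothetical application; only the egress annotation is significant here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        config.nsm.nginx.com/default-egress-allowed: "true"   # route egress via NGINX Plus Ingress Controller
    spec:
      containers:
      - name: my-app
        image: my-app:latest
```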

Creating internal routes for non-meshed services

Internal routes represent a route from NGINX Plus Ingress Controller to a non-meshed service. This route is called “internal” because it is only accessible from a Pod in the mesh and is not accessible from the public internet.

Caution:
If you deploy NGINX Plus Ingress Controller without mTLS enabled, the internal routes could be accessible from the public internet. We do not recommend using the egress feature with a plaintext deployment of NGINX Plus Ingress Controller.

To create an internal route, create an Ingress resource using the information of your non-meshed service and add the following annotation:

nsm.nginx.com/internal-route: "true"
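As a sketch, an Ingress for a hypothetical non-meshed service could look like the following. The service name, host, port, and class are illustrative, and the exact fields may vary with your NGINX Ingress Controller and Kubernetes versions:

```yaml
# Illustrative internal route to a non-meshed service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-service-internal
  annotations:
    nsm.nginx.com/internal-route: "true"   # mark this as an internal route
spec:
  ingressClassName: nginx
  rules:
  - host: legacy.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: legacy-service
            port:
              number: 80
```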

If your non-meshed service is external to Kubernetes, follow the ExternalName services example.

Please see this topic for a tutorial on creating internal routes for non-meshed services.

Enabling Ingress and Egress Traffic

There are two ways to enable both ingress and egress traffic using the NGINX Plus Ingress Controller. You can either allow both ingress and egress traffic through the same NGINX Plus Ingress Controller, or deploy two NGINX Plus Ingress Controllers: one handling only ingress traffic and the other handling only egress traffic.

For the single-deployment option, follow the installation instructions and the instructions for enabling egress. If you would like to configure two Ingress Controllers to keep ingress and egress traffic separate, you can leverage Ingress Classes.
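Using the NGINX Ingress Controller's -ingress-class command-line argument, the two instances could be separated roughly as follows. This is a sketch of the container args only; the class names are illustrative:

```yaml
# Ingress-only instance (args fragment):
args:
- -nginx-plus
- -spire-agent-address=/run/spire/sockets/agent.sock
- -ingress-class=nginx-ingress
---
# Egress-only instance (args fragment):
args:
- -nginx-plus
- -spire-agent-address=/run/spire/sockets/agent.sock
- -enable-internal-routes
- -ingress-class=nginx-egress
```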

Plaintext configuration

Deploy NGINX Service Mesh with mtls-mode set to off and follow the instructions to deploy NGINX Plus Ingress Controller.

Add the enable-ingress and/or the enable-egress annotation shown below to the NGINX Plus Ingress Controller Pod spec:

nsm.nginx.com/enable-ingress: "true"
nsm.nginx.com/enable-egress: "true"
Caution:
All communication between NGINX Plus Ingress Controller and the services in the mesh will be over plaintext! We do not recommend using the egress feature with a plaintext deployment of NGINX Plus Ingress Controller, because internal routes could be accessible from the public internet. We highly recommend installing NGINX Plus Ingress Controller with mTLS enabled.

OpenTracing Integration

To enable traces to span from NGINX Plus Ingress Controller through the backend services in the Mesh, you’ll first need to build the NGINX Plus Ingress Controller image with the OpenTracing module. Refer to the NGINX Ingress Controller guide to using OpenTracing for more information.

NGINX Service Mesh natively supports Zipkin, Jaeger, and DataDog; refer to the Monitoring and Tracing topic for more information.

If you are using a tracing backend deployed by the Mesh, use the CLI tool to find the address of the tracing server and the sample rate.

nginx-meshctl config
{
...
  "tracing": {
    "backend": "zipkin",
    "backendAddress": "zipkin.nginx-mesh.svc.cluster.local:9411",
    "isEnabled": true,
    "sampleRate": .01
  },
...
}

You will need to provide these values in the opentracing-tracer-config field of the NGINX Plus Ingress Controller ConfigMap.

Below is an example of the config for Zipkin:

  opentracing-tracer-config: |
     {
       "service_name": "nginx-ingress",
       "collector_host": "zipkin.nginx-mesh.svc.cluster.local",
       "collector_port": 9411,
       "sample_rate": .01
     }     

Add the annotation shown below to your Ingress resources. Doing so ensures that the span context propagates to the upstream requests and the operation name displays as “nginx-ingress”.

    nginx.org/location-snippets: |
     opentracing_propagate_context;
     opentracing_operation_name "nginx-ingress";     

NGINX Plus Ingress Controller Metrics

To enable metrics collection for the NGINX Plus Ingress Controller, take the following steps:

  1. Run the NGINX Plus Ingress Controller with both the -enable-prometheus-metrics and -enable-latency-metrics command-line arguments. The NGINX Plus Ingress Controller exposes NGINX Plus metrics and latency metrics in Prometheus format via the /metrics path on port 9113. This port is customizable via the -prometheus-metrics-listen-port command-line argument; consult the Command Line Arguments section of the NGINX Plus Ingress Controller docs for more information on the available command-line arguments.

  2. Define the port that Prometheus should scrape metrics from by adding an annotation to the NGINX Plus Ingress Controller Pod spec:

    prometheus.io/port: "<prometheus-metrics-listen-port>"
    
  3. Add the resource name as a label to the NGINX Plus Ingress Controller Pod spec:

    • For Deployment:

      nsm.nginx.com/deployment: <name of NGINX Plus Ingress Controller Deployment>
      
    • For DaemonSet:

      nsm.nginx.com/daemonset: <name of NGINX Plus Ingress Controller DaemonSet>
      

    This allows metrics scraped from NGINX Plus Ingress Controller Pods to be associated with the resource that created the Pods.
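Putting steps 1 through 3 together, the metrics-related fields of a Deployment's Pod template can be sketched as follows (the Deployment name nginx-ingress is illustrative):

```yaml
# Sketch: metrics-relevant fields of the Pod template for a Deployment.
template:
  metadata:
    labels:
      nsm.nginx.com/deployment: nginx-ingress   # associates metrics with the Deployment
    annotations:
      prometheus.io/port: "9113"                # default Prometheus metrics port
  spec:
    containers:
    - name: nginx-plus-ingress
      args:
      - -enable-prometheus-metrics
      - -enable-latency-metrics
```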

View the metrics in Prometheus

The NGINX Service Mesh uses the Pod’s container name setting to identify the NGINX Plus Ingress Controller metrics that should be consumed by the Prometheus server. The Prometheus job targets all Pods that have the container name nginx-plus-ingress.

If you are using an existing Prometheus deployment, add the nginx-plus-ingress scrape config to your Prometheus configuration and consult Use an Existing Prometheus Deployment for installation instructions.
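If you maintain your own Prometheus configuration, the scrape job conceptually resembles the following sketch, which keeps only Pods whose container name is nginx-plus-ingress. Treat this as illustrative of the selection logic, not as the mesh's exact scrape config:

```yaml
# Illustrative scrape job; the mesh's actual config may differ.
scrape_configs:
- job_name: nginx-plus-ingress
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Keep only Pods whose container is named nginx-plus-ingress.
  - source_labels: [__meta_kubernetes_pod_container_name]
    action: keep
    regex: nginx-plus-ingress
```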

Available metrics

For a list of the NGINX Plus Ingress Controller metrics, consult the Available Metrics section of the NGINX Plus Ingress Controller docs.

Note:
The NGINX Plus metrics exported by the NGINX Plus Ingress Controller are renamed from nginx_ingress_controller_<metric-name> to nginxplus_<metric-name> to be consistent with the metrics exported by NGINX Service Mesh sidecars. For example, nginx_ingress_controller_upstream_server_response_latency_ms_count is renamed to nginxplus_upstream_server_response_latency_ms_count. The Ingress Controller specific metrics, such as nginx_ingress_controller_nginx_reloads_total, are not renamed.

For more information on metrics, a list of Prometheus labels, and examples of querying and filtering, see the Traffic Metrics doc.

To view the metrics, use port-forwarding:

kubectl port-forward -n nginx-mesh svc/prometheus 9090

Monitor your application in Grafana

NGINX Service Mesh ships with a Grafana service and a default dashboard which you can use to monitor your application. To view the Grafana dashboard, first port-forward the service:

kubectl port-forward -n nginx-mesh svc/grafana 3000:3000

Then you can navigate your browser to localhost:3000 to view the dashboard. Here is a view of the “NGINX Mesh Top” dashboard shipped with Grafana in the mesh: