Configure a Secure Egress Route with NGINX Plus Ingress Controller

This topic provides a walkthrough of how to securely route egress traffic through NGINX Plus Ingress Controller for Kubernetes with NGINX Service Mesh.

Overview

Learn how to create internal routes in NGINX Plus Ingress Controller to securely route egress traffic to non-meshed services.

Note:
The NGINX Plus version of NGINX Ingress Controller is required for this tutorial.

Objectives

Follow this tutorial to deploy the NGINX Plus Ingress Controller with egress enabled, and securely route egress traffic from a meshed service to a non-meshed service.

Before You Begin

  1. Install kubectl.

  2. Download the example files used in this tutorial: target-v1.0.yaml, egress-driver.yaml, and target-internal-route.yaml.

Install NGINX Service Mesh

  1. Follow the installation instructions to install NGINX Service Mesh on your Kubernetes cluster.

    • When deploying the mesh, set the mTLS mode to strict and add the legacy namespace as a disabled namespace.

    • Your deploy command should contain the following flags:

      nginx-meshctl deploy ... --mtls-mode=strict --disabled-namespaces=legacy
      
  2. Get the config of the mesh and verify that mtlsMode is strict and disabledNamespaces contains the legacy namespace:

    nginx-meshctl config 
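
    The command prints the mesh configuration as JSON. As an abridged illustration, you should find values like the following among the fields (the full output contains many more fields, and its exact shape may vary by release):

    {
      "mtlsMode": "strict",
      "disabledNamespaces": ["legacy"]
    }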
    

Create an Application Outside of the Mesh

The target application is a basic NGINX server listening on port 80. It returns a “target version” value, which is v1.0.
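
The application is defined in the downloaded target-v1.0.yaml file. For reference, here is a minimal sketch of an equivalent manifest; the ConfigMap-based NGINX configuration is an assumption made for illustration, and the actual example file may differ:

apiVersion: v1
kind: ConfigMap
metadata:
  name: target-v1-0-config
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;
        location / {
          # Return the "target version" value as plain text
          default_type text/plain;
          return 200 "target v1.0\n";
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: target-v1-0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: target-v1-0
  template:
    metadata:
      labels:
        app: target-v1-0
    spec:
      containers:
      - name: target
        image: nginx:1.19.2  # matches the Server header seen later in the tutorial
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: config
        configMap:
          name: target-v1-0-config
---
apiVersion: v1
kind: Service
metadata:
  name: target-v1-0
spec:
  selector:
    app: target-v1-0
  ports:
  - port: 80
    targetPort: 80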

  1. Create a namespace, legacy, that will not be managed by the mesh:

    kubectl create namespace legacy
    
  2. Create the target application in the legacy namespace:

    kubectl -n legacy apply -f target-v1.0.yaml
    
  3. Verify that the target application is running and the target pod is not injected with the sidecar proxy:

    kubectl -n legacy get pods,svc
    
    NAME                               READY   STATUS    RESTARTS   AGE
    pod/target-v1-0-5985d8544d-sgkxg   1/1     Running   0          12s
    
    NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
    service/target-v1-0   ClusterIP   10.0.0.0       <none>        80/TCP    11s
    

Send traffic to the target application

  1. Create the sender application in the default namespace:

    kubectl apply -f egress-driver.yaml
    
  2. Verify that the egress-driver pod is injected with the sidecar proxy:

    kubectl get pods -o wide
    
    NAME                             READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
    egress-driver-5587fbdf78-hm4w6   2/2     Running   0          5s     10.1.1.1     node-name   <none>           <none>
    

    The egress-driver Pod automatically sends requests to the target-v1-0.legacy Service: once started, the script waits 10 seconds and then begins sending requests. (A minimal sketch of the egress-driver manifest appears at the end of this section.)

  3. Check the Pod logs to verify that the requests are being sent:

    kubectl logs -f -c egress-driver <EGRESS_DRIVER_POD>
    

    Expectation:

    You should see that the egress-driver is not able to reach the target. The script employs a verbose curl command that also displays connection and HTTP information. For example:

    *   Trying 10.16.14.126:80...
    * Connected to target-v1-0.legacy (10.16.14.126) port 80 (#0)
    > GET / HTTP/1.1
    > Host: target-v1-0.legacy
    > User-Agent: curl/7.72.0-DEV
    > Accept: */*
    > 
    * Received HTTP/0.9 when not allowed
    
    * Closing connection 0
    
  4. Use the top command to check traffic metrics:

    nginx-meshctl top deploy/egress-driver
    

    Expectation: No traffic metrics are populated!

    Cannot build traffic statistics.
    Error: no metrics populated - make sure traffic is flowing
    exit status 1
    

The egress-driver application is unable to reach the target Service because the target is not injected with the sidecar proxy. We are running with --mtls-mode=strict, which restricts the egress-driver to communicating over mTLS with other injected Pods. As a result, we cannot build traffic statistics for these requests.
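
For reference, here is a minimal sketch of what the downloaded egress-driver.yaml might look like; the curl image, request interval, and script details are assumptions made for illustration, and the actual example file may differ:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-driver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: egress-driver
  template:
    metadata:
      labels:
        app: egress-driver
    spec:
      containers:
      - name: egress-driver            # container name used by the kubectl logs commands
        image: curlimages/curl:7.72.0  # assumed image; matches the curl version in the output above
        command: ["/bin/sh", "-c"]
        args:
        - |
          # Give the sidecar proxy time to initialize, then poll the target.
          sleep 10
          while true; do
            curl -v http://target-v1-0.legacy/
            sleep 1  # assumed request interval
          done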

Now, let’s use NGINX Plus Ingress Controller to create a secure internal route from the egress-driver application to the target Service.

Install NGINX Plus Ingress Controller

  1. Install the NGINX Plus Ingress Controller with egress enabled. This tutorial demonstrates installation as a Deployment.

  2. Verify the NGINX Plus Ingress Controller is running:

    kubectl -n nginx-ingress get pods,svc -o wide
    
    NAME                                READY   STATUS    RESTARTS   AGE    IP           NODE          NOMINATED NODE   READINESS GATES
    pod/nginx-ingress-c6f9fb95f-fqklz   1/1     Running   0          5s     10.2.2.2     node-name     <none>           <none>
    

Notice that we do not have a Service fronting NGINX Plus Ingress Controller. This is because we are using NGINX Plus Ingress Controller for egress only, which means we don’t need an external IP address.
The sidecar proxy will route egress traffic to the NGINX Plus Ingress Controller’s Pod IP.

Create an internal route to the legacy target service

To create an internal route from the NGINX Plus Ingress Controller to the legacy target Service, we need to create an Ingress resource with the annotation nsm.nginx.com/internal-route: "true".

Tip:

For this tutorial, the legacy Service is deployed in Kubernetes so the host name of the Ingress resource is the Kubernetes DNS name.

To create internal routes to services outside of the cluster, refer to creating internal routes.

Either copy and apply the Ingress resource shown below, or download and apply the target-internal-route.yaml file.

Important:
If using Kubernetes v1.18.0 or greater, you must use ingressClassName in your Ingress resources. Uncomment line 9 in the resource below or in the downloaded file, target-internal-route.yaml.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: target-internal-route
  namespace: legacy
  annotations:
    nsm.nginx.com/internal-route: "true"
spec:
  # ingressClassName: nginx # use only with k8s version >= 1.18.0
  rules:
  - host: target-v1-0.legacy
    http:
      paths:
      - path: /
        backend:
          serviceName: target-v1-0
          servicePort: 80
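
Apply the resource (its namespace is set to legacy in the metadata, so no -n flag is needed):

kubectl apply -f target-internal-route.yaml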

Verify the Ingress resource has been created

kubectl -n legacy describe ingress target-internal-route

Name:             target-internal-route
Namespace:        legacy
Address:
Default backend:  default-http-backend:80 (10.2.2.2:8080)
Rules:
  Host                Path  Backends
  ----                ----  --------
  target-v1-0.legacy
                      /     target-v1-0:80 (10.0.0.0:80)
Annotations:          nsm.nginx.com/internal-route: true
Events:
  Type     Reason          Age                             From                      Message
  ----     ------          ----                            ----                      -------
  Normal   ADD             <invalid>                       loadbalancer-controller   legacy/target-internal-route
  Normal   AddedOrUpdated  <invalid> (x2 over <invalid>)   nginx-ingress-controller  Configuration for legacy/target-internal-route was added or updated
  Warning  Translate       <invalid> (x11 over <invalid>)  loadbalancer-controller   error while evaluating the ingress spec: service "legacy/target-v1-0" is type "ClusterIP", expected "NodePort" or "LoadBalancer"

Allow the egress-driver application to route egress traffic to NGINX Plus Ingress Controller

To enable the egress-driver application to send egress requests to NGINX Plus Ingress Controller, edit the egress-driver Pod and add the following annotation: config.nsm.nginx.com/default-egress-allowed: "true"
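
One way to add it, using the Pod name from earlier, is kubectl annotate (editing the Deployment's Pod template would also work, but that triggers a rollout and replaces the Pod):

kubectl annotate pod <EGRESS_DRIVER_POD> config.nsm.nginx.com/default-egress-allowed=true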

To verify that the default egress route is configured, look at the logs of the proxy container:

 kubectl logs -f <EGRESS_DRIVER_POD> -c nginx-mesh-sidecar | grep "Enabling default egress route"

Test the internal route

The egress-driver has been continually sending traffic, which is now routed through NGINX Plus Ingress Controller.

kubectl logs -f -c egress-driver <EGRESS_DRIVER_POD>

Expectation: You should see the target service respond with the text target v1.0 and a successful response code. The script employs a verbose curl command that also displays connection and HTTP information. For example:

*   Trying 10.100.9.60:80...
* Connected to target-v1-0.legacy (10.100.9.60) port 80 (#0)
> GET / HTTP/1.1
> Host: target-v1-0.legacy
> User-Agent: curl/7.72.0-DEV
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.19.2
< Date: Wed, 23 Sep 2020 22:24:29 GMT
< Content-Type: text/plain
< Content-Length: 12
< Connection: keep-alive
<
{ [12 bytes data]
target v1.0
* Connection #0 to host target-v1-0.legacy left intact

Use the top command to check traffic metrics:

nginx-meshctl top deploy/egress-driver

Expectation: The egress-driver deployment shows a 100% outgoing success rate, and the nginx-ingress deployment shows a 100% incoming success rate. Keep in mind that the top command only shows traffic from the last 30 seconds.

Deployment     Direction  Resource       Success Rate  P99  P90  P50  NumRequests
egress-driver
               To         nginx-ingress  100.00%       3ms  3ms  2ms  15

These requests from the egress-driver application to target-v1-0.legacy were securely routed through NGINX Plus Ingress Controller, and we now have visibility into the outgoing traffic from the egress-driver application!

Cleaning up

  1. Delete the legacy namespace and the egress-driver application:

    kubectl delete ns legacy
    kubectl delete deploy egress-driver
    
  2. Follow instructions to uninstall NGINX Plus Ingress Controller.

  3. Follow instructions to uninstall NGINX Service Mesh.