End of Sale Notice:
Commercial support for NGINX Service Mesh is available to customers who currently have active NGINX Microservices Bundle subscriptions. F5 NGINX announced the End of Sale (EoS) for the NGINX Microservices Bundle as of July 1, 2023.
See our End of Sale announcement for more details.
Configure a Secure Egress Route with NGINX Ingress Controller
This topic provides a walkthrough of how to securely route egress traffic through F5 NGINX Ingress Controller for Kubernetes with NGINX Service Mesh.
Overview
Learn how to create internal routes in F5 NGINX Ingress Controller to securely route egress traffic to non-meshed services.
Note:
NGINX Ingress Controller can be used for free with NGINX Open Source. Paying customers have access to NGINX Ingress Controller with NGINX Plus. To complete this tutorial, you must use either:
- Open Source NGINX Ingress Controller version 3.0+
- NGINX Plus version of NGINX Ingress Controller
Objectives
Follow this tutorial to deploy the NGINX Ingress Controller with egress enabled, and securely route egress traffic from a meshed service to a non-meshed service.
Before You Begin
- Install kubectl.
- Download the example files.
Install NGINX Service Mesh
Note:
If you want to view metrics for NGINX Ingress Controller, ensure that you have deployed Prometheus and Grafana and then configure NGINX Service Mesh to integrate with them when installing. Refer to the Monitoring and Tracing guide for instructions.
- Follow the installation instructions to install NGINX Service Mesh on your Kubernetes cluster.
- When deploying the mesh, set the mTLS mode to `strict`. Your deploy command should contain the following flags:

  nginx-meshctl deploy ... --mtls-mode=strict

- Get the config of the mesh and verify that `mtls.mode` is `strict`:

  nginx-meshctl config
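Rather than scanning the full `nginx-meshctl config` output by eye, you can extract just the mTLS mode. A minimal sketch, assuming the config is printed as JSON with an `mtls.mode` field; the sample document below is illustrative, not real output:

```shell
# Abridged sample of the JSON shape assumed for `nginx-meshctl config` output.
config='{"mtls":{"mode":"strict","caTTL":"720h"}}'

# Extract the mTLS mode with sed; expect "strict" after deploying with --mtls-mode=strict.
mode=$(printf '%s' "$config" | sed -n 's/.*"mode":"\([a-z]*\)".*/\1/p')
echo "$mode"   # strict
```

Against a live mesh, pipe the real command output into the same filter.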
Create an Application Outside of the Mesh
The `target` application is a basic NGINX server listening on port 80. It returns a “target version” value, which is `v1.0`.
- Create a namespace, `legacy`, that will not be managed by the mesh:

  kubectl create namespace legacy

- Create the `target` application in the `legacy` namespace:

  kubectl -n legacy apply -f target-v1.0.yaml
- Verify that the target application is running and the target Pod is not injected with the sidecar proxy (a READY count of 1/1 shows that only the application container is running, with no sidecar):

  kubectl -n legacy get pods,svc

  NAME                               READY   STATUS    RESTARTS   AGE
  pod/target-v1-0-5985d8544d-sgkxg   1/1     Running   0          12s

  NAME                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
  service/target-v1-0   ClusterIP   10.0.0.0     <none>        80/TCP    11s
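The contents of `target-v1.0.yaml` are not shown in this walkthrough. A minimal sketch of what such a manifest could look like follows; the labels and image are assumptions for illustration, not the actual file from the example bundle:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: target-v1-0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: target-v1-0
  template:
    metadata:
      labels:
        app: target-v1-0
    spec:
      containers:
      - name: target
        image: nginx   # a basic NGINX server listening on port 80
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: target-v1-0
spec:
  selector:
    app: target-v1-0
  ports:
  - port: 80
```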
Send traffic to the target application
- Enable automatic sidecar injection for the `default` namespace.
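The command for enabling automatic injection is not shown above. A sketch, assuming automatic sidecar injection is enabled per namespace with the `injection=enabled` label described in the NGINX Service Mesh documentation:

```shell
# Label the default namespace so new Pods get the sidecar injected automatically.
kubectl label namespace default injection=enabled
```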
- Create the sender application, `egress-driver`, in the `default` namespace:

  kubectl apply -f egress-driver.yaml
- Verify that the `egress-driver` Pod is injected with the sidecar proxy:

  kubectl get pods -o wide

  NAME                             READY   STATUS    RESTARTS   AGE   IP         NODE        NOMINATED NODE   READINESS GATES
  egress-driver-5587fbdf78-hm4w6   2/2     Running   0          5s    10.1.1.1   node-name   <none>           <none>
  The `egress-driver` Pod will automatically send requests to the `target-v1-0.legacy` Service. Once started, the script delays for 10 seconds and then begins sending requests.

- Check the Pod logs to verify that the requests are being sent:

  kubectl logs -f -c egress-driver <EGRESS_DRIVER_POD>
Expectation: You should see that the `egress-driver` is not able to reach `target`. The script employs a verbose curl command that also displays connection and HTTP information. For example:

*   Trying 10.16.14.126:80...
* Connected to target-v1-0.legacy (10.16.14.126) port 80 (#0)
> GET / HTTP/1.1
> Host: target-v1-0.legacy
> User-Agent: curl/7.72.0-DEV
> Accept: */*
>
* Received HTTP/0.9 when not allowed
* Closing connection 0
- Use the top command to check traffic metrics:

  nginx-meshctl top deploy/egress-driver

Expectation: No traffic metrics are populated!

Cannot build traffic statistics.
Error: no metrics populated - make sure traffic is flowing
exit status 1
The `egress-driver` application is unable to reach the `target` Service because the target is not injected with the sidecar proxy. We are running with `--mtls-mode=strict`, which restricts the `egress-driver` to communicating over mTLS with other injected Pods. As a result, we cannot build traffic statistics for these requests.
Now, let’s use NGINX Ingress Controller to create a secure internal route from the `egress-driver` application to the `target` Service.
Install NGINX Ingress Controller
- Install the NGINX Ingress Controller. This tutorial demonstrates installation as a Deployment.
- Follow the instructions to enable egress.
- Verify that the NGINX Ingress Controller is running:

  kubectl -n nginx-ingress get pods,svc -o wide

  NAME                              READY   STATUS    RESTARTS   AGE   IP         NODE        NOMINATED NODE   READINESS GATES
  pod/nginx-ingress-c6f9fb95f-fqklz   1/1   Running   0          5s    10.2.2.2   node-name   <none>           <none>
Notice that we do not have a Service fronting NGINX Ingress Controller. This is because we are using NGINX Ingress Controller for egress only, which means we don’t need an external IP address. The sidecar proxy will route egress traffic to the NGINX Ingress Controller’s Pod IP.
Create an internal route to the legacy target service
To create an internal route from the NGINX Ingress Controller to the legacy `target` Service, we need to create either:

- an Ingress resource with the annotation `nsm.nginx.com/internal-route: "true"`, or
- a VirtualServer resource with the following field set in its spec:

  spec:
    internalRoute: true
Tip:
For this tutorial, the legacy Service is deployed in Kubernetes so the host name of the Ingress/VirtualServer resource is the Kubernetes DNS name.
To create internal routes to services outside of the cluster, refer to creating internal routes.
Either copy and apply the Ingress or VirtualServer resource shown below, or download and apply the linked file.
Ingress:
Important:
If using Kubernetes v1.18.0 or greater, you must use `ingressClassName` in your Ingress resources. Uncomment line 9 in the resource below or in the downloaded file, `target-internal-route.yaml`.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: target-internal-route
  namespace: legacy
  annotations:
    nsm.nginx.com/internal-route: "true"
spec:
  # ingressClassName: nginx # use only with k8s version >= 1.18.0
  tls:
  rules:
  - host: target-v1-0.legacy
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: target-v1-0
            port:
              number: 80
VirtualServer:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: target-vs-internal-route
  namespace: legacy
spec:
  internalRoute: true
  ingressClassName: nginx
  host: target-v1-0.legacy
  upstreams:
  - name: legacy
    tls:
      enable: false
    service: target-v1-0
    port: 80
  routes:
  - path: /
    action:
      pass: legacy
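Either resource can be applied with kubectl. For example, with the downloaded Ingress file (the namespace is already set in the manifest's metadata):

```shell
kubectl apply -f target-internal-route.yaml
```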
Verify the Ingress or VirtualServer resource has been created:
kubectl -n legacy describe ingress target-internal-route
kubectl -n legacy describe virtualserver target-vs-internal-route
Allow the egress-driver application to route egress traffic to NGINX Ingress Controller
To enable the `egress-driver` application to send egress requests to NGINX Ingress Controller, edit the `egress-driver` Pod and add the following annotation:

config.nsm.nginx.com/default-egress-allowed: "true"
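If you'd rather not hand-edit the Pod spec, `kubectl annotate` can add the annotation to the running Pod; a sketch (substitute your actual egress-driver Pod name for the placeholder):

```shell
kubectl annotate pod <EGRESS_DRIVER_POD> config.nsm.nginx.com/default-egress-allowed="true"
```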
To verify that the default egress route is configured, look at the logs of the proxy container:
kubectl logs -f <EGRESS_DRIVER_POD> -c nginx-mesh-sidecar | grep "Enabling default egress route"
Test the internal route
The `egress-driver` application should have been continually sending traffic, which will now be routed through NGINX Ingress Controller.
kubectl logs -f -c egress-driver <EGRESS_DRIVER_POD>
Expectation: You should see the target service respond with the text `target v1.0` and a successful response code. The script employs a verbose curl command that also displays connection and HTTP information. For example:
* Trying 10.100.9.60:80...
* Connected to target-v1-0.legacy (10.100.9.60) port 80 (#0)
> GET / HTTP/1.1
> Host: target-v1-0.legacy
> User-Agent: curl/7.72.0-DEV
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.19.2
< Date: Wed, 23 Sep 2020 22:24:29 GMT
< Content-Type: text/plain
< Content-Length: 12
< Connection: keep-alive
<
{ [12 bytes data]
target v1.0
* Connection #0 to host target-v1-0.legacy left intact
Use the top command to check traffic metrics:
nginx-meshctl top deploy/egress-driver
Expectation: The `nginx-ingress` deployment will show a 100% incoming success rate and the `egress-driver` deployment will show a 100% outgoing success rate. Keep in mind that the `top` command only shows traffic from the last 30s.

Deployment      Direction   Resource        Success Rate   P99   P90   P50   NumRequests
egress-driver
                To          nginx-ingress   100.00%        3ms   3ms   2ms   15
These requests from the `egress-driver` application to `target-v1-0.legacy` were securely routed through the NGINX Ingress Controller, and we now have visibility into the outgoing traffic from the `egress-driver` application!
Cleaning up
- Delete the `legacy` namespace and the `egress-driver` application:

  kubectl delete ns legacy
  kubectl delete deploy egress-driver

- Follow the instructions to uninstall NGINX Ingress Controller.
- Follow the instructions to uninstall NGINX Service Mesh.