Expose a UDP Application with NGINX Plus Ingress Controller

This topic describes how to deploy NGINX Plus Ingress Controller for Kubernetes to expose a UDP application within NGINX Service Mesh.


Follow this tutorial to deploy the NGINX Plus Ingress Controller with NGINX Service Mesh and an example UDP application.


  • Deploy NGINX Service Mesh.
  • Install NGINX Plus Ingress Controller.
  • Deploy the example udp-listener app.
  • Create a Kubernetes GlobalConfiguration resource to establish an NGINX Plus Ingress Controller UDP listener.
  • Create a Kubernetes TransportServer resource for the udp-listener application.
The NGINX Plus version of NGINX Ingress Controller is required for this tutorial.

Install NGINX Service Mesh

Follow the installation instructions to install NGINX Service Mesh on your Kubernetes cluster. UDP traffic proxying is disabled by default, so you will need to enable it using the --enable-udp flag when deploying. Linux kernel 4.18 or greater is required.
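For example, a deployment with UDP proxying enabled looks roughly like the following; any other flags depend on your environment, and `nginx-meshctl` is the mesh's CLI:

```
$ nginx-meshctl deploy --enable-udp
```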

Before proceeding, verify that the mesh is running (Step 2 of the installation instructions). On startup, NGINX Plus Ingress Controller will try to fetch certificates from the Spire agent deployed by NGINX Service Mesh. If the mesh is not running, NGINX Plus Ingress Controller will fail to start.
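A quick way to spot-check the control plane, assuming the mesh was deployed into its default nginx-mesh namespace, is to confirm all of its pods (including the Spire agent and server) are Running:

```
$ kubectl get pods -n nginx-mesh
```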

Install NGINX Plus Ingress Controller

  1. Install NGINX Plus Ingress Controller with the option to allow UDP ingress traffic. This tutorial will demonstrate installation as a Deployment.

    mTLS does not affect UDP communication, as mTLS in NGINX Service Mesh applies only to TCP traffic at this time.
  2. Get access to the NGINX Plus Ingress Controller by applying the udp-nodeport.yaml NodePort resource.

  3. Check the exposed port from the NodePort service just defined:

    $ kubectl get svc -n nginx-ingress
    NAME                    TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
    nginx-ingress           NodePort   ...          <none>        80:32705/TCP,443:30181/TCP   57m
    udp-listener-nodeport   NodePort   ...          <none>        8900:31839/UDP               6m35s

    As you can see, our exposed port is 31839. We’ll use this for the remaining steps.

  4. Get the IP of one of your worker nodes:

    $ kubectl get node -o wide
    NAME                                     ... INTERNAL-IP     EXTERNAL-IP ...
    gke-aidan-dev-default-pool-f507f772-qiun ...  ...
    gke-aidan-dev-default-pool-f507f772-tjpo ...  ...

    We’ll use the external IP of one of these nodes for the remaining steps.
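For reference, a minimal sketch of what the udp-nodeport.yaml Service might contain, inferred from the `kubectl get svc` output above; the selector label is an assumption and must match the labels on your NGINX Plus Ingress Controller pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: udp-listener-nodeport
  namespace: nginx-ingress
spec:
  type: NodePort
  selector:
    app: nginx-ingress   # assumption: the label used by your Ingress Controller pods
  ports:
  - name: udp-listener
    protocol: UDP
    port: 8900           # matches the GlobalConfiguration listener port
    targetPort: 8900
```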

At this point, you should have the NGINX Plus Ingress Controller running in your cluster; you can deploy the udp-listener example app to test out the mesh integration, or use NGINX Plus Ingress Controller to expose one of your own apps.

Deploy the udp-listener App

Use kubectl to deploy the example udp-listener app.

If automatic injection is enabled, NGINX Service Mesh will inject the sidecar proxy into the application pods automatically. Otherwise, use manual injection to inject the sidecar proxies.

kubectl apply -f udp-listener.yaml
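If automatic injection is disabled, the sidecar can be added at apply time instead; a sketch assuming the `nginx-meshctl` CLI is on your path:

```
$ nginx-meshctl inject < udp-listener.yaml | kubectl apply -f -
```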

Verify that all of the Pods are ready and in “Running” status:

kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
udp-listener-59665d7ffc-drzh2   2/2     Running   0          4s

Expose the udp-listener App

To route UDP requests to an application in the mesh through the NGINX Plus Ingress Controller, you will need both a GlobalConfiguration and TransportServer Resource.

  1. Deploy a GlobalConfiguration to configure what port to listen for UDP requests on:

    $ kubectl apply -f nic-global-configuration.yaml

    The GlobalConfiguration configures a listener to listen for UDP datagrams on a specified port.

    apiVersion: k8s.nginx.org/v1alpha1
    kind: GlobalConfiguration
    metadata:
      name: nginx-configuration
      namespace: nginx-ingress
    spec:
      listeners:
      - name: accept-udp
        port: 8900
        protocol: UDP
  2. Apply the TransportServer to route UDP traffic from the GlobalConfiguration listener to your udp-listener app.

    $ kubectl apply -f udp-transportserver.yaml

    This TransportServer will route requests from the listener supplied in the GlobalConfiguration to a named upstream – in this case udp-listener-upstream. Our upstream is configured to pass traffic to our udp-listener service at port 5005, where our udp-listener application lives.

    apiVersion: k8s.nginx.org/v1alpha1
    kind: TransportServer
    metadata:
      name: udp-listener
    spec:
      listener:
        name: accept-udp
        protocol: UDP
      upstreams:
      - name: udp-listener-upstream
        service: udp-listener
        port: 5005
      upstreamParameters:
        udpRequests: 1
      action:
        pass: udp-listener-upstream
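Once both resources are applied, you can confirm that NGINX Plus Ingress Controller accepted them; these resource names assume the custom resource definitions installed alongside the Ingress Controller:

```
$ kubectl get globalconfiguration nginx-configuration -n nginx-ingress
$ kubectl get transportserver udp-listener
```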

Send Datagrams to the udp-listener App

Now that everything for the NGINX Plus Ingress Controller is deployed, we can send datagrams to the udp-listener application.

  1. Use the IP and port defined in the Install NGINX Plus Ingress Controller section to send a netcat UDP message:

    $ echo "UDP Datagram Message" | nc -u <node-external-ip> 31839
  2. Check that the “UDP Datagram Message” text was correctly sent to the udp-listener server:

    $ kubectl logs udp-listener-59665d7ffc-drzh2 -c udp-listener
    Listening on UDP port 5005
    UDP Datagram Message
  3. Check that the UDP message is present in the udp-listener sidecar logs:

    $ kubectl logs udp-listener-59665d7ffc-drzh2 -c nginx-mesh-sidecar
    2022/01/22 00:09:31 SVID updated for spiffeID: "spiffe://example.org/ns/default/sa/default"
    2022/01/22 00:09:31 Enqueueing event: SPIRE, key: 0xc00007ac00
    2022/01/22 00:09:31 Dequeueing event: SPIRE, key: 0xc00007ac00
    2022/01/22 00:09:31 Reloading nginx with configVersion: 2
    2022/01/22 00:09:31 executing nginx -s reload
    2022/01/22 00:09:32 success, version 2 ensured. iterations: 4. took: 100ms
    [08/Feb/2022:19:49:02 +0000] UDP 200 0 49 0.000 "" "21" "0" "0.000"

    We’re looking for the [08/Feb/2022:19:49:02 +0000] UDP 200 0 49 0.000 "" "21" "0" "0.000" line, which includes the UDP protocol and the correct size of the UDP packet we sent.

    Notice the 49 bytes representing the incoming packet size: the 21-byte message plus the 28 bytes of headroom added to the packet to maintain original destination information. See the UDP and eBPF architecture section for more information on why this is necessary.
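The arithmetic is easy to check locally: echo appends a trailing newline, so the payload on the wire is 21 bytes, and adding the 28 bytes of mesh headroom gives the 49 bytes seen in the log.

```shell
# "UDP Datagram Message" is 20 characters; echo adds a trailing newline -> 21 bytes on the wire.
payload=$(printf '%s\n' "UDP Datagram Message" | wc -c | tr -d ' ')
total=$((payload + 28))   # 28 bytes of headroom added to preserve destination info
echo "payload=${payload} total=${total}"
# -> payload=21 total=49
```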