Set up NGINXaaS Loadbalancer for Kubernetes

Overview

NGINXaaS Loadbalancer for Kubernetes, or NLK, is a Kubernetes controller that works with F5 NGINX as a Service for Azure to act as an external load balancer directing traffic into Kubernetes. NGINXaaS acts similarly to a Service with type=LoadBalancer, but NGINX can operate at either L4 (stream) or L7 (HTTP). You remain in control of the NGINX config while NLK dynamically populates the servers in the NGINX upstreams.

flowchart TB

   Users[웃 Users] -.-> |GET '/tea' | NGINXaaS{NGINXaaS}
   NGINXaaS -.-> P1
   NLK --> |Update upstream 'tea'| NGINXaaS

   subgraph AK[Azure Kubernetes Cluster]
      TeaSvc{Tea svc} -.-> P2(Pod)
      TeaSvc -.-> P1(Pod)
      k8sapi[K8s API] --> |watch| NLK(NLK controller)
   end

   style Users color:orange,stroke:orange,fill:#faefd9
   linkStyle 0,1 color:orange,stroke:orange
   style NLK color:green,stroke:green,stroke-width:4px,fill:#d9fade
   style NGINXaaS color:green,stroke:green,stroke-width:4px,fill:#d9fade
   linkStyle 2 color:green,stroke:green
   style AK fill:#9bb1de,color:#
   style k8sapi color:#3075ff,stroke:#3075ff,stroke-width:4px
   linkStyle 5 color:#3075ff,stroke:#3075ff

   accDescr: A diagram showing users sending GET requests to NGINXaaS, which proxies traffic to a Kubernetes-based service named "TeaSvc" running multiple pods in an Azure Kubernetes Cluster, with upstream configurations dynamically managed via an NLK controller watching the Kubernetes API.
 

The NLK controller watches Kubernetes Services and sends upstream information to dynamically populate an NGINX upstream. NGINXaaS ensures that NGINX receives this dynamic state when it is first sent, as well as whenever the NGINXaaS deployment scales or is repaired.

Example Use Case: I’m using NGINX as a Service for rate limiting and NGINX App Protect, but I have configured it to pass all accepted traffic to Kubernetes where an in-cluster ingress controller routes to specific workloads.

Example Use Case: I’m using NGINX as a Service to receive incoming traffic on api.example.com. NGINX passes traffic matching location /login to an upstream that is the “login” service in a Kubernetes cluster. NGINX passes traffic matching location /graph to an upstream that is the “graph” service in a different Kubernetes cluster. A third location, “/process”, is passed to an app server running on a standalone virtual machine.
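A sketch of what the second use case's NGINX config could look like. The upstream names come from the use case itself; the VM address and ports are hypothetical, and the requirements for making the Kubernetes-backed upstreams NLK-compatible are covered under Create an NGINX config with a dynamic upstream below:

```nginx
http {
  # Populated dynamically by the NLK controller in each cluster
  upstream login { zone login 64K; state /tmp/login.state; }
  upstream graph { zone graph 64K; state /tmp/graph.state; }

  # Standalone VM backend (hypothetical address)
  upstream process { server 10.0.2.4:8080; }

  server {
    listen 80;
    server_name api.example.com;

    location /login   { proxy_pass http://login; }
    location /graph   { proxy_pass http://graph; }
    location /process { proxy_pass http://process; }
  }
}
```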

Getting Started

This guide provides an initial setup of pairing NGINXaaS and Azure Kubernetes Service. It uses some default values to get a working environment as quickly as possible. See Advanced Configuration below for in-depth configuration details.

Before following the steps in this guide, ensure that the NGINXaaS deployment’s delegated subnet and the AKS nodes can communicate with each other, for example, by being on the same Azure Virtual Network or on peered Virtual Networks.

Initial Connection

The steps in this section must be completed once for each new setup. We will install a small controller in the Kubernetes cluster and authorize it to send updates to the NGINXaaS deployment.

  1. Create an NGINXaaS data plane API key
  2. Look up the NGINXaaS data plane API endpoint
  3. Install the NLK controller

Create an NGINXaaS data plane API key

Note:

Data plane API key requirements:

  • Always has an expiration date. The default is six months from the date of creation. The maximum is two years from the date of creation.
  • Minimum length: 12 characters
  • May contain ASCII letters, symbols, and numbers.
  • Must contain characters from at least three of the following four categories: lowercase letters, uppercase letters, symbols, and numbers.

A UUID v4 is a solid choice, with over 120 bits of entropy.
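These rules can be sanity-checked locally before assigning a key. A sketch in POSIX shell (validate_key is a hypothetical helper for illustration; whether Azure counts the hyphen as a symbol is an assumption here):

```shell
# Hypothetical helper: succeeds if a candidate key meets the documented
# requirements (length >= 12, at least three of four character categories).
validate_key() {
  key=$1
  classes=0
  [ "${#key}" -ge 12 ] || return 1
  case $key in *[a-z]*) classes=$((classes + 1)) ;; esac         # lowercase
  case $key in *[A-Z]*) classes=$((classes + 1)) ;; esac         # uppercase
  case $key in *[0-9]*) classes=$((classes + 1)) ;; esac         # numbers
  case $key in *[!a-zA-Z0-9]*) classes=$((classes + 1)) ;; esac  # symbols (assumed to include '-')
  [ "$classes" -ge 3 ]
}

# A UUID v4 has lowercase letters, numbers, and hyphens: three categories.
validate_key "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d" && echo "key ok"
```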

The data plane API key can be created using the Azure CLI or portal.

Create an NGINXaaS data plane API key using the Azure portal
  1. Go to your NGINXaaS for Azure deployment.
  2. Select NGINX API keys in the left menu.
  3. Select New API Key.
  4. Provide a name for the new API key in the right panel and select an expiration date.
  5. Select the Add API Key button.
  6. Copy the value of the new API key.
Note:
This value will only be available directly after key creation.
Create an NGINXaaS data plane API key using the Azure CLI

Set shell variables about the name of the NGINXaaS you’ve already created:

## Customize these to match your already created NGINXaaS deployment
nginxName=myNginx
nginxGroup=myNginxGroup

Generate a new random data plane API key:

# Make a new random key
keyName=myKey
keyValue=$(uuidgen --random)

Assign the new key to your NGINXaaS deployment:

az rest --method PUT \
  --uri "/subscriptions/{subscriptionId}/resourceGroups/$nginxGroup/providers/NGINX.NGINXPLUS/nginxDeployments/$nginxName/apiKeys/$keyName?api-version=2024-09-01-preview" \
  --body "{\"properties\": {\"secretText\": \"$keyValue\"}}"

Look up the NGINXaaS data plane API endpoint

The data plane API endpoint can be retrieved using the Azure CLI or portal.

Look up the NGINXaaS data plane API endpoint using the Azure portal
  1. Go to your NGINXaaS for Azure deployment.
  2. Select NGINX API keys in the left menu.
  3. The data plane API endpoint associated with the deployment is available at the top of the screen.
Look up the NGINXaaS data plane API endpoint using the Azure CLI
dataplaneAPIEndpoint=$(az resource show --api-version "2024-09-01-preview" --resource-type "Nginx.NginxPlus/nginxDeployments" -g "$nginxGroup" -n "$nginxName" --query properties.dataplaneApiEndpoint -o tsv)
Note:
The Azure CLI is being updated to include these commands directly. Until then, az rest can be used to talk to the Azure API directly.

Install the NLK controller

The NLK controller can be installed in your Kubernetes cluster using either Helm or the AKS Extension.

Install the NLK controller using Helm

Install the NLK controller using helm install. Be sure your kubectl context is pointed at the correct cluster.

helm install nlk oci://registry-1.docker.io/nginxcharts/nginxaas-loadbalancer-kubernetes --version 0.6.0 \
  --set "nlk.dataplaneApiKey=${keyValue}" \
  --set "nlk.config.nginxHosts=${dataplaneAPIEndpoint}nplus"
Install the AKS Extension using the Azure CLI

Install the NLK controller using az k8s-extension.

## Customize these to match your already created AKS cluster
aksName=myCluster
aksGroup=myClusterGroup
az k8s-extension create \
  --name nlk \
  --extension-type "nginxinc.nginxaas-aks-extension" \
  --scope cluster \
  --cluster-name ${aksName} \
  --resource-group ${aksGroup} \
  --cluster-type managedClusters \
  --plan-name f5-nginx-for-azure-aks-extension \
  --plan-product f5-nginx-for-azure-aks-extension \
  --plan-publisher f5-networks \
  --release-namespace nlk \
  --config nlk.dataplaneApiKey=${keyValue} \
  --config nlk.config.nginxHosts=${dataplaneAPIEndpoint}nplus
Install the AKS Extension using the Azure portal

You can also install the NLK controller AKS extension by navigating to F5 NGINXaaS Loadbalancer for Kubernetes in the Azure Marketplace and following the installation steps.

  • Select Get it now and Continue.

  • On the Create F5 NGINXaaS AKS extension Basics tab, provide the following information:

    Field Description
    Subscription Select the appropriate Azure subscription.
    Resource group Select the AKS cluster’s resource group.
  • Select Cluster Details, and provide the AKS cluster name.

  • Select Application Details, and provide the following information:

    Field Description
    Cluster extension resource name Provide a name for the NLK controller.
    Installation namespace Provide the AKS namespace for the NLK controller.
    Allow minor version upgrades of extension Select whether to allow the extension to be upgraded automatically to the latest minor version.
    NGINXaaS Dataplane API Key Provide the previously generated Dataplane API key value: {keyValue}
    NGINXaaS Dataplane API Endpoint Provide the previously retrieved Dataplane API endpoint value: {dataplaneAPIEndpoint}nplus
  • Select Review + Create to continue.

  • Azure will validate the extension settings. This page will provide a summary of the provided information. Select Create.

Note:
The NGINXaaS API that NLK uses is mounted at ${dataplaneAPIEndpoint}nplus. For example, if the data plane API endpoint is https://mynginx-75b3bf22a555.eastus2.nginxaas.net/ then the value for nlk.config.nginxHosts should be https://mynginx-75b3bf22a555.eastus2.nginxaas.net/nplus.
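The derivation described in the note can be done directly in the shell when setting the Helm value (the hostname below is the note's example, not a real deployment):

```shell
# Example data plane API endpoint from the note above (not a real deployment).
dataplaneAPIEndpoint="https://mynginx-75b3bf22a555.eastus2.nginxaas.net/"

# The NLK-facing API is mounted under nplus; the endpoint already ends in '/'.
nginxHosts="${dataplaneAPIEndpoint}nplus"
echo "$nginxHosts"   # prints https://mynginx-75b3bf22a555.eastus2.nginxaas.net/nplus
```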

Create an NGINX config with a dynamic upstream

You must define an NGINX upstream for NLK to populate.

For an upstream to be compatible with NLK, it must be completely dynamic, that is:

  • The upstream cannot have any servers listed in it; the controller will fill in servers dynamically.
  • The upstream must have a shared memory zone defined.
  • The upstream must have a state file declared.

For example:

http {
  upstream my-service {
    zone my-service 64K;          # required
    state /tmp/my-service.state;  # required

    # (Optional) Enable keepalive to the upstream to avoid connection setup on every request
    # https://www.f5.com/company/blog/nginx/avoiding-top-10-nginx-configuration-mistakes#no-keepalives
    keepalive 16;
  }

  # Don't forget to route traffic to it!
  server {
    listen 80;
    location / {
        proxy_pass http://my-service;

        proxy_http_version 1.1;
        proxy_set_header   "Connection" "";
    }
  }
}

Apply the config!
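The same three requirements apply to an upstream in the stream (L4) context. A minimal sketch, reusing the jet upstream name from the port-naming example in the next section (the listener port is hypothetical):

```nginx
stream {
  upstream jet {
    zone jet 64K;          # required
    state /tmp/jet.state;  # required
  }

  server {
    listen 12345;          # hypothetical TCP listener
    proxy_pass jet;        # stream context: no scheme prefix
  }
}
```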

Create a Kubernetes Service

Expose a Kubernetes Service to route traffic to your workload. The Service has a few requirements:

  • Add the annotation: nginx.com/nginxaas: nginxaas to mark the service to be monitored by NLK.
  • type: NodePort tells Kubernetes to open a port on each node.
  • The port name must be formatted as {{NGINX Context}}-{{NGINX upstream name}}. For example:
    • If the upstream is in the http context and named my-service then the name is http-my-service
    • If the upstream is in the stream context and named jet then the port name is stream-jet
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Let the controller know to pay attention to this service.
    # If you are connecting multiple controllers, the value can be used to distinguish them
    nginx.com/nginxaas: nginxaas
spec:
  # expose a port on the nodes
  type: NodePort
  ports:
    - targetPort: http
      protocol: TCP
      # The port name helps connect to NGINXaaS. It must be prefixed with either `http-` or `stream-`
      # and the rest of the name must match the name of an upstream in that context.
      name: http-my-service
  selector:
    app: awesome
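The port-name convention above is easy to derive mechanically; a small sketch (port_name is a hypothetical helper for illustration, not part of NLK):

```shell
# Hypothetical helper: build the NLK port name from an NGINX context
# ("http" or "stream") and an upstream name.
port_name() {
  printf '%s-%s\n' "$1" "$2"
}

port_name http my-service   # prints "http-my-service"
port_name stream jet        # prints "stream-jet"
```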

Advanced Configuration

Controller Configuration

Helm Value Description Value
nlk.config.logLevel How verbose the NLK controller logs should be. Possible values are debug, info, warn, error. Default: warn
nlk.config.nginxHosts The NGINX Plus APIs to send upstream updates to. Should be set to {{dataplaneApiEndpoint}}nplus
nlk.config.serviceAnnotationMatch The value to match on a Service’s nginx.com/nginxaas annotation. Useful when configuring multiple NLK controllers to update separate NGINXaaS deployments. Default: nginxaas.
nlk.dataplaneApiKey The NGINXaaS data plane API key that will authorize the controller to talk to your NGINXaaS deployment

Multiple NLK controllers

Multiple clusters

A single NGINXaaS deployment can direct traffic to multiple different AKS clusters. Each AKS cluster needs its own copy of NLK installed and connected to NGINXaaS.

flowchart TB

   TeaUsers[웃 Users] -.-> |GET /tea | NGINXaaS{NGINXaaS}
   CoffeeUsers[웃 Users] -.-> |GET /coffee | NGINXaaS
   NGINXaaS -.-> |GET /tea| E
   H --> |Update upstream 'tea'| NGINXaaS
   NGINXaaS -.-> |GET /coffee| K
   M --> |Update upstream 'coffee'| NGINXaaS

   subgraph SG2[Azure Kubernetes Cluster 2]
      k8sapi2[K8s API] --> |watch| M(NLK controller)
      I{Coffee svc} -.-> J(Pod)
      I -.-> K(Pod)
   end

   subgraph SG1[Azure Kubernetes Cluster 1]
      k8sapi1[K8s API] --> |watch| H(NLK controller)
      D{Tea svc} -.-> E(Pod)
      D -.-> F(Pod)
   end


   style TeaUsers color:red,stroke:red,fill:#faefd9
   linkStyle 0,2 color:red,stroke:red
   style CoffeeUsers color:orange,stroke:orange,fill:#faefd9
   linkStyle 1,4 color:orange,stroke:orange
   style NGINXaaS color:green,stroke:green,stroke-width:4px,fill:#d9fade
   linkStyle 3,5 color:green,stroke:green
   style SG1 fill:#9bb1de,color:#
   style SG2 fill:#9bb1de,color:#
   style k8sapi1 color:#3075ff,stroke:#3075ff,stroke-width:4px
   style k8sapi2 color:#3075ff,stroke:#3075ff,stroke-width:4px
   linkStyle 6,9 color:#3075ff,stroke:#3075ff
   style H color:green,stroke:green,stroke-width:4px,fill:#d9fade
   style M color:green,stroke:green,stroke-width:4px,fill:#d9fade

   accDescr: A diagram showing NGINXaaS directing separate user GET requests for `/tea` and `/coffee` to respective Kubernetes-based services "TeaSvc" and "CoffeeSvc" that are running in separate Azure Kubernetes Clusters. An NLK controller in each cluster is independently updating the NGINXaaS with dynamic upstream configuration.
 
Note:
Configuring multiple NLK controllers to update the same upstream isn’t supported and will result in unpredictable behavior.

Multiple NGINXaaS deployments

Multiple NLK controllers can be installed in the same AKS cluster to update separate NGINXaaS deployments.

Each NLK installation needs a unique Helm release name and a unique value for nlk.config.serviceAnnotationMatch. Each NLK will only watch Services that have the matching annotation.
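For example, a Service intended for a second controller installed with --set nlk.config.serviceAnnotationMatch=nginxaas-green (a hypothetical value) would be annotated to match; a sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Matches only the controller installed with
    # nlk.config.serviceAnnotationMatch=nginxaas-green (hypothetical value)
    nginx.com/nginxaas: nginxaas-green
```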

Troubleshooting

NGINXaaS Loadbalancer for Kubernetes and NGINXaaS continually monitor the system and attempt to repair it in case of error. However, if upstreams are not populated as expected, here are a few things you can check.

NLK controller logs

The controller reports status information about the requests it is making to NGINXaaS. This is a good place to look to ensure that the controller has picked up your service and that it is communicating with NGINXaaS correctly.

View the logs as you would any other deployment in Kubernetes. For example, kubectl logs deployment/nlk-nginxaas-loadbalancer-kubernetes.

The logs can be made more verbose by setting the Helm value nlk.config.logLevel (see Controller Configuration).

Metrics

The metrics that NGINXaaS reports can be used to find more information. Especially useful metrics include:

  • plus.http.upstream.peers.state.up – whether the peer reports being healthy
  • plus.http.upstream.peers.request.count – which peers are handling requests


Last modified November 20, 2024