Releases

Note:

FIPS-compliant images are currently impacted by compatibility issues with a dependent library.

We recommend against:

  1. Patching older FIPS images, which could re-introduce the incompatible dependency.
  2. Building new custom FIPS images.
Note:
In our next major release, v4.0.0, the default logging library for NGINX Ingress Controller will change from golang/glog to log/slog. Logs generated by NGINX Ingress Controller will then be in a structured format, with the option of string or JSON output. Logs generated by NGINX are not affected. To ensure backwards compatibility, the existing glog format will remain available through a configuration option for the next three releases.
Important:

CRD version removal notice. In our next major release, v4.0.0, support for the following apiVersions for these listed CRDs will be dropped:

  1. k8s.nginx.org/v1alpha1 for GlobalConfiguration
  2. k8s.nginx.org/v1alpha1 for Policy
  3. k8s.nginx.org/v1alpha1 for TransportServer

Prior to upgrading, please ensure that any of these resources deployed as apiVersion: k8s.nginx.org/v1alpha1 are updated to apiVersion: k8s.nginx.org/v1. If a resource of kind: GlobalConfiguration, kind: Policy or kind: TransportServer is deployed as apiVersion: k8s.nginx.org/v1alpha1, it will be deleted when upgrading from, at least, v3.4.0 to v4.0.0.
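For example, a GlobalConfiguration currently deployed with the alpha apiVersion should be re-applied with the v1 apiVersion before upgrading (a minimal sketch; the listener shown is illustrative):

apiVersion: k8s.nginx.org/v1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  - name: dns-udp       # illustrative listener
    port: 5353
    protocol: UDP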

When v4.0.0 is released, the release notes will contain the required upgrade steps to go from v3.X.X to v4.X.X.

3.7.1

06 Nov 2024

Fixes

  • 6735 Add nil check to apikey suppliedIn
  • 6761 Add OIDC fix for ID token nonce claim validation

Dependencies

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.25-1.31.


3.7.0

30 Sept 2024

Added support for VirtualServer and TransportServer resources to listen on a specific IP address when configuring a listener, allowing NGINX to bind to a specific interface. This is also useful for scenarios where pods need to connect to multiple networks (multi-homed pods).

An End Session Endpoint can now be configured for OIDC providers via Policy, allowing a user to be fully logged out of their IdP session. This change also adds support for configuring a post-logout redirect URI, allowing users to be redirected to a custom logout page.
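A minimal sketch of the listener IP binding described above, assuming the GlobalConfiguration listener accepts an ipv4 field and the VirtualServer references the listener by name (listener and service names are illustrative):

apiVersion: k8s.nginx.org/v1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  - name: http-8083
    port: 8083
    protocol: HTTP
    ipv4: 192.168.1.10    # assumed field: bind this listener to a specific address
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  listener:
    http: http-8083       # use the named listener instead of the default port 80
  upstreams:
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /
    action:
      pass: coffee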

The access_log directive can now be configured to point to a syslog server. Previously, access logs defaulted to standard output. This change allows log parsers and aggregators to ingest access logs from NGINX.
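A sketch of the ConfigMap change this enables; the access-log key name and syslog address below are assumptions, so check the ConfigMap resource documentation for the exact key:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  # assumed key: points the http-context access_log at a syslog server instead of stdout
  access-log: "syslog:server=syslog-svc.default.svc.cluster.local:514"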

When installing NGINX Ingress Controller via Helm, a uniquely named lease object will be created automatically. This allows for multiple deployments of NGINX Ingress Controller in the same namespace when leader election is enabled, without requiring a unique name to be specified manually for each deployment.

Features

  • 5968 Add BUILD_OS to Telemetry
  • 6014 Sync oidc repo
  • 6092 Support End Session Endpoint for OIDC and allow customizable Post-logout Redirect URI
  • 5648 Make access_log in http context configurable
  • 6180 Add ip as an option to listeners for VirtualServer
  • 6367 Add ip as an option to listeners for TransportServer

Fixes

  • 5786 Change log level, to Info and above, before calling prometheus exporter functions
  • 5838 Fix api key policy undefined routes
  • 5885 Add default telemetry endpoint
  • 5899 GRPC healthcheck should not have keepalive time
  • 6125 Don’t log errors for not implemented grpc metrics
  • 6232 Fix panic when creating VirtualServer
  • 6372 Create unique lease obj for each NIC installed via Helm
  • 6406 Fix udp/http listener validation logic
  • 6446 Disable batch reload when batch finishes

Helm Chart

  • 5817 Remove include-year and includeYear flag
  • 5335 Choose NodePort values for controller.service.type = LoadBalancer
  • 6235 Update helm docs by @vepatel

Dependencies

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.25-1.31.


3.6.2

19 Aug 2024

Fixes

  • 6125 Don’t log errors for not implemented grpc metrics
  • 6223 Re-order mounting debian apt source file

Dependencies

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.25-1.31.


3.6.1

04 Jul 2024

Fixes

  • 5921 GRPC healthcheck should not have keepalive time
  • 5889 Add default telemetry endpoint

Dependencies

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.25-1.30.


3.6.0

25 Jun 2024

Added support for the latest generation of NGINX App Protect Web Application Firewall, v5. NGINX Ingress Controller will continue to support the NGINX App Protect v4 family, allowing customers to adopt the new Policy Bundle workflow at their own pace. NGINX App Protect WAF v5 does not accept JSON-based policies; instead, users must compile a Policy Bundle outside of the NGINX Ingress Controller pod. Policy Bundles contain a combination of custom policy, signatures, and campaigns, and can be compiled using either the App Protect compiler or NGINX Instance Manager. Learn more at https://docs.nginx.com/nginx-ingress-controller/installation/integrations/app-protect-waf-v5/.
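A sketch of referencing a pre-compiled bundle from a WAF Policy, assuming an apBundle field that names a bundle file already present on a volume mounted into the NGINX Ingress Controller pod (the field and file names are illustrative):

apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: waf-policy
spec:
  waf:
    enable: true
    # assumed field: pre-compiled bundle available on the mounted volume
    apBundle: "cafe-policy-bundle.tgz"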

With this release, NGINX Ingress Controller is implementing a new image maintenance policy. Container images for subscribed users will be updated on a regular basis between releases to reduce CVE vulnerabilities. Customers can check the 3.6.x tags when listing images in the registry and select the latest image to update to for the current release.

Features

  • 5698, 5771 & 5784 Add support for F5 NGINX AppProtect WAF v5
  • 5580 & 5752 Add APIKey Authentication policy
  • 5205 Preserve valid listeners when invalid listeners are present in GlobalConfiguration
  • 5366 Add proxy-set-headers annotation for ingress
  • 5406, 5408, 5418, 5404 & 5415 Add additional telemetry data

Fixes

  • 5350 Fix ap-waf flag in error message
  • 5318 Don’t reload when use-cluster-ip endpoints update, and change the ingress use-cluster-ip implementation to use the cluster ip instead of the fqdn
  • 5375 Fix status for invalid vs and vsr, for weight changes dynamic reload

Helm Chart

  • 5313 Update helm flag in docs for enableWeightChangesDynamicReload

Dependencies

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.25-1.30.


3.5.2

31 May 2024

Important

  • Bundles compiled with NAP WAF versions <= v4.8.x are not compatible with NAP WAF versions >= 4.9.x. This release of NIC includes NAP WAF v4.10, so recompilation of policy bundles is required. JSON-based WAF policies are not affected by this change.

Dependencies

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.25-1.30.

3.5.1

08 May 2024

Fixes

  • 5463 Don’t reload when use-cluster-ip endpoints update
  • 5464 Fix status for invalid vs and vsr, for weight changes dynamic reload
  • 5470 Add support for named ports in ingresses which use-cluster-ip

Helm Chart

  • 5315 Update helm flag in docs for enableWeightChangesDynamicReload

Dependencies

  • 5511 & 5391 Go updates
  • 5490 Pin app-protect module version to 4.8.1

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.23-1.29.

3.5.0

26 Mar 2024

NGINX Ingress Controller and NGINX App Protect WAF users can now view violations through NGINX Instance Manager Security Monitor. Security Monitor can be used to build Policy bundles, reducing reload time impacts on NGINX Ingress Controller. Read more in NGINX App Protect WAF Bundles and Security Monitoring.

When using NGINX Plus for two version split rollouts, you can now control progressive rollouts of a new backend version without reloading NGINX using the -weight-changes-dynamic-reload command line argument.

The use-cluster-ip annotation is now available for the Ingress resource. use-cluster-ip supports service meshes and specific use cases where the backend service should be the target instead of individual backend service pods, bypassing upstream load balancing.
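A minimal Ingress sketch using the new annotation; the full annotation key shown (nginx.org/use-cluster-ip) is my assumption:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    # assumed key: proxy to the service ClusterIP instead of individual pod endpoints
    nginx.org/use-cluster-ip: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cafe-svc
            port:
              number: 80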

Features

  • 5179 & 5051 Add NIM Security Dashboard integration for App Protect WAF security violations
  • 5212 Weight changes Dynamic Reload
  • 4862 Add use-cluster-ip annotation for ingress resources
  • 4660 Add annotations for controlling request rate limiting
  • 5083 Update default values for keepalive-requests and keepalive-timeout
  • 5084 Allow securityContext and podSecurityContext to be configurable via helm parameters
  • 5199 Update zone size for transportserver resource
  • 4896, 5095, 5147, 5155, 5170, 5176, 5217, 5245, 5237, 5256, 5167 & 5261 Export Telemetry data to XCDF

Fixes

  • 5211 Move set above rewrite to fix uninitialized variable
  • 5175 Initialize stopCh channel for ExternalDNS
  • 5053 Ensure backup server is removed from upstreams when the Backup Service is deleted

Helm Chart

  • 5159 Refactor volumes and volumeMounts to common helpers
  • 5179 Move common pod label definitions to helpers

Dependencies

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.23-1.29.

3.4.3

19 Feb 2024

Fixes

  • 5008 Remove redundant Prometheus variable labels
  • 4744 Fixed validation for VSR exact & regex subroutes. Thanks to jo-carter.
  • 4832 Fix new lines in snippets
  • 5020 Fix template file spacing for ssl_protocols directive
  • 5041 Allow waf users to build without dos repo access

Helm Chart

  • 4953 Add docs links to helm NOTES.txt

Dependencies

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.23-1.29.

3.4.2

16 Jan 2024

Fixes

  • 4934 GCR & AWS Plus image publishing fix

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.23-1.29.

3.4.1

15 Jan 2024

Dependencies

  • 4886 Update N+ to R31.
  • 4886 Bump Go dependencies.

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.23-1.29.

3.4.0

19 Dec 2023

The default_server listeners for ports 80 and 443 can now be fully customized giving you the flexibility to shift the HTTP and HTTPS default listeners to other ports as your needs require.

Traffic splits now support weights from 0 to 100, giving you the control you expect when performing canary rollouts of your backend services.
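For example, a canary rollout can now be dialed all the way to 0 or 100 in a VirtualServer route split (a minimal sketch; upstream and service names are illustrative):

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: coffee-v1
    service: coffee-v1-svc
    port: 80
  - name: coffee-v2
    service: coffee-v2-svc
    port: 80
  routes:
  - path: /coffee
    splits:
    - weight: 0        # weights of 0 and 100 are now accepted
      action:
        pass: coffee-v1
    - weight: 100
      action:
        pass: coffee-v2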

A new “upstream backup” capability has been introduced for NGINX Plus customers. This gives you the control to set a backup service for any path, taking advantage of NGINX health checks to automatically forward traffic to the backup service when all pods of the primary service stop responding.

Dynamic reloading of SSL certificates takes advantage of native NGINX functionality to load updated certificates when they are requested, so a reload is no longer required when certificates are updated.

A number of Helm enhancements have come directly from our community, ranging from greater flexibility for HPA and namespace sharing for custom sidecars to support for multiple image pull secrets for greater deployment flexibility.

To make sure NGINX Ingress Controller follows Helm best practices, we’ve refactored our Helm chart location. You can now find our Helm charts under charts/nginx-ingress.

We’ve added the functionality to define App Protect WAF bundles for VirtualServers by creating policy bundles and putting them on a mounted volume accessible from NGINX Ingress Controller.

Features

  • 4574 Graduate TransportServer and GlobalConfiguration to v1.
  • 4464 Allow default_server listeners to be customised.
  • 4526 Update use of http2 listen directive to align with deprecation.
  • 4276 Use Lease for leader election.
  • 4655 Support weights 0 and 100 in traffic splitting.
  • 4653 Add support for backup directive for VS and TS.
  • 4788 Dynamic reload of SSL certificates
  • 4428 Add option for installing CRDs from a single remote yaml.

Fixes

  • 4504 Delete the DNSEndpoint resource when VS is deleted & Ratelimit requeues on errors.
  • 4575 update dockerfile for debian NGINX Plus.

Helm Chart

  • 4306 Refactor Helm Chart location.
  • 4391 Add HPA Custom Behavior. Thanks to saedx1.
  • 4559 Add process namespace sharing for ingress controller. Thanks to panzouh.
  • 4651 Add initContainerResources Helm configuration.
  • 4656 Allows multiple imagePullSecrets in the helm chart. Thanks to AlessioCasco.

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.22-1.29.


3.3.2

1 Nov 2023

Fixes

  • 4578 Update Dockerfile to add user creation for NGINX Plus images.

Dependencies

  • 4572 Update NGINX version to 1.25.3.
  • 4569, 4591 Bump Go dependencies.

Upgrade


3.3.1

13 Oct 2023

Overview

This release updates NGINX Plus to R30 P1 and updates dependencies to mitigate the HTTP/2 Rapid Reset Attack vulnerability CVE-2023-44487.

Dependencies

Upgrade


3.3.0

26 Sep 2023

Overview

With release 3.3, NGINX Ingress Controller continues to advance capabilities for an ever-demanding set of use cases that go beyond simple layer 7 routing for services running exclusively in Kubernetes.

When involved in diagnostic operations, whether viewing the NGINX Plus console or the enhanced NGINX Plus metrics through Prometheus, customers now benefit from the backend service being available as an added dimension, aiding the identification of issues as well as the observation of performance.

50% of our users continue to rely heavily on the Ingress resource and its “mergeable Ingress” usage pattern. To enhance the experience for these customers, we have added the path-regex annotation with support for case-sensitive, case-insensitive, and exact regex match patterns.
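A sketch of the annotation on an Ingress resource; the exact value strings (case_sensitive, case_insensitive, exact) are my reading of the supported match modes:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: coffee-ingress
  annotations:
    # assumed values: case_sensitive, case_insensitive, or exact
    nginx.org/path-regex: "case_sensitive"
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: "/coffee/[A-Za-z0-9]+"
        pathType: ImplementationSpecific
        backend:
          service:
            name: coffee-svc
            port:
              number: 80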

Prometheus continues to be the most popular metrics platform for Kubernetes users. To make integration with Prometheus easier to set up, we have finalized support for the Prometheus serviceMonitor capability, providing better scraping controls for Prometheus administrators.

For our most demanding customers, a blue/green upgrade of the Ingress Controller itself is now supported, providing their business customers an enhanced experience with no loss of session fidelity. Support for this pattern and others has been added through a Helm chart enhancement that allows two deployments to share a single IngressClass resource and duplicate the same configuration.

To accommodate these enhancements, several new values have been added to our Helm chart, along with modifications to existing values. Due to the potential impact of these changes, we have issued a major release of the Helm chart, advancing it to v1.0.0.

To better align with the demands of supporting additional protocols such as MQTT and QUIC, NGINX Ingress Controller is changing how listeners are defined for HTTP traffic. You have always had control over the ports defined for TCP/UDP traffic through the GlobalConfiguration and TransportServer objects. That same flexibility has been introduced for HTTP/S traffic and the VirtualServer. This area will continue to expand to give customers full control over NGINX listeners so they can tailor them to their specific needs and policies.

Features

  • 4023 Read Prometheus key/cert from memory.
  • 4080 Expose Location Zones metrics.
  • 4127, 4200, 4223 Add path-regex annotation for ingress.
  • 4108 Add command line argument for custom TLS Passthrough port.
  • 4271 Add custom listener controls to VirtualServer.

Fixes

  • 4160 Update JWT/JWKS policy validation.
  • 4371 Improve runtime batch reloads.

Helm Chart

  • 3977 Add support for controller.selectorLabels. Thanks to Youqing Han.
  • 4058 Add clusterIP to service if specified in values. Thanks to EutiziStefano.
  • 4252 Make containerPort and hostPort customizable.
  • 4331 Expose Prometheus metrics through a headless Service.
  • 4351 Update helm values file to move controller.serviceMonitor to prometheus.serviceMonitor.
  • 4333 Allow installing IC without creating a new ingress class.

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.22-1.28.


3.2.1

17 Aug 2023

Dependencies

  • Update NGINX version to 1.25.2.
  • Update NGINX Plus version to R30.
  • Update Go to 1.21 and Go dependencies.

Upgrade


3.2.0

27 June 2023

Features

  • 3790 Gunzip for VS
  • 3863 OIDC - relaxed OIDC scope validation
  • 3925 Specify runAsNonRoot in daemon-set manifests. Thanks to Valters Jansons.
  • 3951 Add NGINX Plus images to Google Marketplace.
  • 3954 Add utilization tracking for supported (paid) customers.
  • 4001 Add support for the SameSite sticky cookie attribute.
  • 4022 Add document to tutorial section for configuring the default OIDC implementation.
  • 4031 Add NGINX Plus Alpine image with FIPS inside for supported (paid) customers.

Fixes

  • 3737 Update VirtualServer to ignore CRL for EgressMTLS.
  • 3798 Update VirtualServer template to generate an internal jwt auth location per policy applied.
  • 3844 Fix gunzip support for VS and add python tests.
  • 3870 Add Funcs() method to UpdateVirtualServerTemplate method. Thanks to Bryan Hendryx.
  • 3933 fix --external-service flag when using serviceNameOverride. Thanks to Tim N.

Dependencies

  • Update NGINX version to 1.25.1.
  • Update Debian to 12 for NGINX Plus images (except for images containing the NGINX App Protect modules).
  • Update Alpine to 3.18 for NGINX Plus images.

Helm Chart

  • 3814 Remove semverCompare for allocateLoadBalancerNodePorts. Thanks to Alex Wied.
  • 3905 Reverse order of NAPDOS maxDaemons and maxWorkers in Helm chart.

Upgrade

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.22-1.27.


3.1.1

04 May 2023

Overview

This release reverts the changes made in 3.1.0 to use sysctls to bind to lower-level ports without the NET_BIND_SERVICE capability. It also adds support for serviceNameOverride in the Helm chart, which can be used to override the service name for NGINX Ingress Controller. This is especially useful during an upgrade from versions prior to 3.1.0, to avoid downtime due to the service name change. To use this feature, set the serviceNameOverride value in the Helm chart to the name of the existing service.

For example, if the existing service name is my-release-nginx-ingress, you can use --set serviceNameOverride=my-release-nginx-ingress when running the upgrade command. Here is an example upgrade command that keeps the existing service name my-release-nginx-ingress for a deployment named my-release:

helm upgrade my-release oci://ghcr.io/nginxinc/charts/nginx-ingress --version 0.17.1 --set serviceNameOverride=my-release-nginx-ingress

Fixes

  • 3737 Update VirtualServer to ignore CRL for EgressMTLS.
  • 3722 Inherit NET_BIND_SERVICE from IC to Nginx. Thanks to Valters Jansons.
  • 3798 Update VirtualServer template to generate an internal jwt auth location per policy applied.

Features

  • 3491 Egress via Ingress VirtualServer Resource.

Dependencies

  • Update NGINX version to 1.23.4.
  • Update NGINX Plus version to R29.

Helm Chart

  • 3602 Updated NGINX Service Mesh references in Helm templates. Thanks to Jared Byers.
  • 3773 Swap cpu and memory in HPA template. Thanks to Bryan Hendryx.
  • 3802 Add serviceNameOverride. Thanks to Tim N.
  • 3815 Fix GlobalConfiguration name in Helm Chart.
  • 3862 Add correct indentation to controller-leader-election configmap helm template.

Upgrade


3.1.0

29 Mar 2023

Overview

  • Beginning with release 3.1.0, the NET_BIND_SERVICE capability is no longer used; NGINX Ingress Controller instead relies on the net.ipv4.ip_unprivileged_port_start sysctl to allow port binding. Kubernetes 1.22 or later is required for this sysctl to be classified as safe. Ensure that you are using the latest updated deployment and daemonset example yaml files available in the repo.
  • The minimum supported version of Kubernetes is now 1.22. NGINX Ingress Controller now uses sysctls to bind to lower level ports without additional privileges. This removes the need to use NET_BIND_SERVICE to bind to these ports. Thanks to Valters Jansons for making this feature possible!
  • Added support for loading pre-compiled AppProtect Policy Bundles when using the -enable-app-protect cli argument. This feature removes the need for the Ingress Controller to compile NGINX App Protect Policy when NGINX App Protect Policy is updated.
  • IngressMTLS policy now supports configuring a Certificate Revocation List (CRL). When using this feature, requests made with a revoked certificate will be rejected. See Using a Certificate Revocation List for details on configuring this option.
  • NGINX Ingress Controller now supports running with a Read-only Root Filesystem. This improves the security posture of NGINX Ingress Controller by protecting the file system from unknown writes. See Configure root filesystem as read-only for details on configuring this option with both HELM and Manifest. Thanks to Valters Jansons for making this feature possible!
  • Helm deployments can now set custom environment variables with controller.env (see the sketch after this list). Thanks to Aaron Shiels for making this possible!
  • HELM deployments can now configure a pod disruption budget allowing deployments to configure either a minimum number or a maximum unavailable number of pods. Thanks to Bryan Hendryx for making this possible!
  • NGINX Ingress Controller uses the latest OIDC reference implementation which now supports forwarding access tokens to upstreams / backends. Thanks to Shawn Kim for making this possible!
  • The default TLS secret is now optional. This improves the security posture of NGINX Ingress Controller by enabling ssl_reject_handshake, which immediately terminates the SSL handshake and avoids revealing TLS or cipher settings to requests that do not match a configured hostname.
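A minimal Helm values sketch for the controller.env value mentioned in the list above; the variable itself is illustrative and the list is assumed to follow the standard container env format:

controller:
  env:
  - name: MY_CUSTOM_VAR       # illustrative variable
    value: "some-value"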

Features

  • 3034 Allow extra args to be provided to the OIDC auth endpoint. Thanks to Alan Wilkie.
  • 3474 Add access token support in the OIDC. Thanks to Shawn Kim.
  • 3326 Add support for custom environment variables on the Nginx Controller container. Thanks to Aaron Shiels.
  • 3527 Change controller.topologySpreadConstraints schema to array. Thanks to Marco Londero.
  • 3248 Add Pod disruption budget option to HELM based installations. Thanks to Bryan Hendryx.
  • 3462 Add initial support for SSL termination for TransportServer.
  • 3451 Enable keepalive-time for healthchecks in VS and VSR.
  • 3560 Add support for loading a pre-compiled AppProtect Policy Bundle.
  • 3632 Update nginx.org/ca secret type & crl field to IngressMTLS to support CRL.
  • 3629 Use the “runtime default” seccomp profile. Thanks to Valters Jansons.
  • 3573 Rework port binding logic without privileges. Thanks to Valters Jansons.
  • 3646 Remove app protect agent.
  • 3507 Support empty path for ImplementationSpecific pathType.
  • 3482 Use new NSM Spiffe and Cert rotation library.
  • 3442 Add websocket protocol option to monitor directive.
  • 3674 Move NAP DoS chart to new repo.
  • 3302 Make default-server-secret optional.
  • 3586 Add new labels and metadata to add version information to pods.

Fixes

  • 3463 Support non-vs created Challenge Ingress.
  • 3475 Ensure leader election is correctly disabled when option is set to false in helm template.
  • 3481 Add missing OSS internal routes for integration with NSM.
  • 3541 Ensure non-ready endpoints are not added to upstreams.
  • 3583 Update keyCache path for JWKs to avoid conflict with OIDC.
  • 3607 Clear Content-Length headers for requests processed by internal JWKS routes.
  • 3660 Remove unwanted chars from label value.

Helm Chart

  • 3581 Push edge Helm Chart to OCI registries.
  • 3449 Correct values.schema.json nodeSelector.
  • 3448 Fix Helm Chart Schema for priorityClassName.
  • 3519 Add OnDelete to allowed strategy values.
  • 3537 Update schema references to k8s v1.26.1.
  • 3606 Fix Helm Chart labels and templates. Move version update to labels.

Upgrade

  • Make sure the Kubernetes version is in the supported platforms listed below.
  • For NGINX, use the 3.1.0 images from our DockerHub, GitHub Container, Amazon ECR Public Gallery or Quay.io.
  • For NGINX Plus, use the 3.1.0 images from the F5 Container registry or build your own image using the 3.1.0 source code.
  • For Helm, use version 0.17.0 of the chart.

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.22-1.26.


3.0.2

13 Feb 2023

Fixes

  • 3519 Add OnDelete to allowed strategy values
  • 3541 Ensure non-ready endpoints are not added to upstreams
  • 3527 Fix controller.topologySpreadConstraints schema, thanks to Marco Londero

Upgrade


3.0.1

25 Jan 2023

Fixes

  • 3448 Fix Helm Chart Schema for priorityClassName
  • 3449 Correct nodeSelector in the Helm Chart Schema
  • 3463 Support non-VS created Challenge Ingress
  • 3481 Add missing OSS internal routes for NGINX Service Mesh

Upgrade


3.0.0

12 January 2023

Overview

  • Added support for Deep Service Insight for VirtualServer and TransportServer using the -enable-service-insight cli argument.
  • The minimum supported version of Kubernetes is now 1.21. NGINX Ingress Controller 3.0.0 removes support for k8s.io/v1/Endpoints API in favor of discovery.k8s.io/v1/EndpointSlices. For older Kubernetes versions, use the 2.4.x release of the Ingress Controller.
  • Added support for EndpointSlices.
  • Added support to dynamically reconfigure namespace watchers using labels with the -watch-namespace-label cli argument, and to watch secrets using the -watch-secret-namespace cli argument.
  • Allow configuration of the NGINX directives map-hash-bucket-size and map-hash-max-size using the ConfigMap resource (see the sketch after this list).
  • Added support for fetching JWKs from a remote URL to dynamically validate JWT tokens and optimize performance through caching.
  • Beginning with release 1.7, NGINX Service Mesh will include support for the free version of NGINX Ingress Controller as well as the paid version.
  • NGINX Ingress Controller + NGINX App Protect Denial of Service is now available through the AWS Marketplace.
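A minimal ConfigMap sketch for the map-hash directives mentioned in the list above (the values are illustrative):

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  map-hash-bucket-size: "128"
  map-hash-max-size: "4096"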

Breaking Changes

  • 3260 Added support for EndpointSlices.

Features

  • 3299 Support Dynamic namespaces using Labels.
  • 3261 Deep service insight endpoint for VirtualServer CR.
  • 3361 Added healthcheck for TransportServer CR.
  • 3347 Import JWKS from URL on JWT policy.
  • 3274 Allow configuration of map-hash-bucket-size and map-hash-max-size directives.
  • 3376 NGINX Service Mesh will support the free version of NGINX Ingress Controller when using NGINX open source.
  • 3170 Watch subset of namespaces for secrets. Thanks to Hans Feldt.
  • 3341 Set value of $remote_addr to client IP when TLSPassthrough and Proxy Protocol are enabled.
  • 3131 NAP DoS images are now available in the AWS Marketplace.
  • 3231 Always print build info and flags used at the start to provide better supportability.
  • 2735 Support default client proxy headers to be overwritten in VirtualServer. Thanks to Alex Wied.
  • 3133 Added caseSensitiveHttpHeaders to APPolicy CRD. Thanks to Pavel Galitskiy.

Fixes

  • 3139 Remove all IPV6 listeners in ingress resources with -disable-ipv6 command line.

Helm Chart

  • 3113 Added JSON Schema.
  • 3143 Added annotations for deployment and daemonset.
  • 3136 Added controller.dnsPolicy. Thanks to Dong Wang.
  • 3065 Added annotations to the service account. Thanks to 0m1xa.
  • 3276 Added horizontalpodautoscaler. Thanks to Bryan Hendryx.

Upgrade

  • Make sure the Kubernetes version is in the supported platforms listed below.
  • For NGINX, use the 3.0.0 images from our DockerHub, GitHub Container or Amazon ECR Public Gallery.
  • For NGINX Plus, use the 3.0.0 images from the F5 Container registry or the AWS Marketplace or build your own image using the 3.0.0 source code.
  • For Helm, use version 0.16.0 of the chart. Helm does not upgrade the CRDs. If you’re using custom resources like VirtualServer and TransportServer (controller.enableCustomResources is set to true), after running the helm upgrade command, run kubectl apply -f deployments/helm-chart/crds to upgrade the CRDs.

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.21-1.26.


2.4.2

30 Nov 2022

Dependencies

  • Update NGINX Plus version to R28.
  • Update NGINX App Protect WAF version to 4.0.
  • Update NGINX App Protect DoS version to 3.1.

Upgrade


2.4.1

19 October 2022

Dependencies

  • 3183 Update NGINX version to 1.23.2.
  • 3175 Update Go dependencies.
  • Update NGINX Plus version to R27 P1.

Fixes

  • 3139 Remove all IPV6 listeners in ingress resources with -disable-ipv6 command line.

Upgrade


1.12.5

19 October 2022

Dependencies

  • Update NGINX version to 1.23.2.
  • Update NGINX Plus version to R27 P1.
  • Update Alpine to 3.16.
  • Update Go to 1.19 and Go dependencies.

Upgrade

  • For NGINX, use the 1.12.5 image from our DockerHub: nginx/nginx-ingress:1.12.5, nginx/nginx-ingress:1.12.5-alpine or nginx/nginx-ingress:1.12.5-ubi
  • For NGINX Plus, please build your own image using the 1.12.5 source code.
  • For Helm, use version 0.10.5 of the chart.

2.4.0

04 October 2022

Overview

  • Added support for enabling proxy_protocol when port 443 is being used for both HTTPS traffic and TLS Passthrough traffic.
  • Updates to the TransportServer resource to support using ExternalName services. For examples, see externalname-services.
  • The VirtualServer resource now supports wildcard hostnames (see the sketch after this list).
  • NGINX Ingress Controller images including the combined NGINX AppProtect WAF and NGINX AppProtect DoS solutions are now published to our registry. See Images with NGINX Plus for a detailed list of images in our registry.
  • Added support for watching multiple namespaces using the -watch-namespace cli argument. This can be configured by passing a comma-separated list of namespaces to the -watch-namespace CLI argument (e.g. -watch-namespace=ns-1,ns-2).
  • A new cli argument has been added: -include-year. This appends the current year to the log output from the Ingress Controller. Example output: I20220512 09:20:42.345457.
  • Post-startup configuration reloads have been optimized to reduce traffic impacts. When many resources are modified at the same time, changes are combined to reduce the number of data plane reloads.
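A minimal VirtualServer sketch with a wildcard hostname, as mentioned in the list above (the upstream and service names are illustrative):

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: wildcard
spec:
  host: "*.example.com"       # wildcard hostname
  upstreams:
  - name: catch-all
    service: default-svc
    port: 80
  routes:
  - path: /
    action:
      pass: catch-all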

Features

  • 2986 Batch reloads at runtime.
  • 2914 Support watching multiple namespaces.
  • 2884 Include year in logs.
  • 2993 Accept proxy protocol when TLS passthrough enabled.
  • 3041 Support external name service for TransportServer.
  • 2939 Add support for wildcard hostname in VirtualServer.
  • 3040 Add command line argument to manually disable IPV6 listeners for unsupported clusters.
  • 3088 Filter secrets of type helm.sh/release.v1.

Fixes

  • 2971 fix: Correct error message on missing path in path validation. Thanks to Zachary Seguin.
  • 3095 do not create configmap if customConfigMap is used. Thanks to Bryan Hendryx.

Helm Chart

  • 3087 Allow omitting the default server secret from Helm installs.
  • 2831 Add ServiceMonitor to Helm Chart. Thanks to araineUnity.
  • 2855 Add initialDelaySeconds to helm charts. Thanks to Daniel Edgar.
  • 2979 Allow to specify image with digest in helm chart. Thanks to Hans Feldt.
  • 3031 Adding automountServiceAccountToken to helm chart.

Upgrade

  • For NGINX, use the 2.4.0 images from our DockerHub, GitHub Container or Amazon ECR Public Gallery.
  • For NGINX Plus, use the 2.4.0 images from the F5 Container registry or the AWS Marketplace or build your own image using the 2.4.0 source code.
  • For Helm, use version 0.15.0 of the chart. If you’re using custom resources like VirtualServer and TransportServer (controller.enableCustomResources is set to true), after you run the helm upgrade command, the CRDs will not be upgraded. After running the helm upgrade command, run kubectl apply -f deployments/helm-chart/crds to upgrade the CRDs.

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and that passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.19-1.25.


2.3.1

16 September 2022

Dependencies

  • 3048 Bump NGINX to 1.23.1
  • 3049 Update Go dependencies.

Helm Chart

  • 3031 Add automountServiceAccountToken to helm chart

Upgrade


2.3.0

12 July 2022

Overview

  • Support making VirtualServer resources discoverable via public DNS servers using external-dns. Examples for configuring external-dns with NGINX Ingress Controller can be found here.
  • Resolves CVE-2022-30535. This vulnerability impacted the visibility of secrets accessible by NGINX Ingress Controller. In some cases, secrets visible to NGINX Ingress Controller could be exposed to any authenticated user with permission to create and update Ingress objects. This vulnerability affected Ingress objects only - our Custom Resources (VirtualServer and TransportServer) were not affected. Customers unable to upgrade should migrate any Ingress resources to VirtualServer resources where possible, and use RBAC to restrict write access for users for Ingress objects.
  • Support using HTTP basic authentication with VirtualServer and Ingress resources (see the sketch after this list). Special thanks to Simon Wachter.
  • Support HTTP01 type ACME Issuers for use with VirtualServer resources with cert-manager.
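A sketch of an HTTP basic authentication Policy, as mentioned in the list above; the basicAuth field names reflect my understanding and the secret is assumed to hold an htpasswd file:

apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: basic-auth-policy
spec:
  basicAuth:
    secret: htpasswd-secret    # assumed: a secret containing an htpasswd file
    realm: "Restricted Area"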

Features

  • 2581 Add OpenTracing to all Debian and Alpine based images.
  • 2328 Add handling of multiple log destinations.
  • 2691 AP: log-conf escaping chars.
  • 2759 Add support for HTTP01 Challenges on VirtualServer resources.
  • 2762 Add DNSEndpoint CRD for integration with ExternalDNS.
  • 2801 Add SBOMs to release.
  • 2269 HTTP basic auth support. Thanks to Simon Wachter.
  • 2800 Integrate external-dns with VirtualServer resources.
  • 2583 Add runAsNonRoot in deployments.
  • 2484 Add container resource requests.
  • 2627 Update InternalRoute server_name.
  • 2742 Add additional unit tests to confirm special characters can’t be used in the lb-method annotation.
  • 2730 Add string sanitisation for proxy-pass-headers & proxy-hide-headers.
  • 2733 Add string validation to server-tokens annotation.
  • 2734 Validate rewrite annotation.
  • 2754 Validate JWT key, realm and login url for ingress resources annotations.
  • 2751 Add string validation to sticky-cookie-services annotation.
  • 2775 Add validation to Ingress path.
  • 2774 Sanitize nginx.com/jwt-token.
  • 2783 Update validation regex for path spec.
  • 2781 Report Hostname in ExternalEndpoint for VS and VSR resources.

Fixes

  • 2617 Fix Dockerfile for amd64 microarchitectures.
  • 2637 Add terminationGracePeriodSeconds to deployment. Thanks to Maksym Iv.
  • 2654 Sync changes from OIDC repo, add field in policy.
  • 2673 Fix status.loadbalancer.hostname deletion on OOMKill. Thanks to Heiko Voigt.
  • 2718 Fix cases where CM enabled but no TLS block specified in VS.

Helm Chart

  • 2418 Add support for allocateLoadBalancerNodePorts, ipFamilyPolicy and ipFamilies. Thanks to centromere.
  • 2672 Add minReadySeconds & strategy support. Thanks to Ciaran.
  • 2625 allow configuring topologySpreadConstraints in Helm chart. Thanks to Kamil Domański.

Upgrade

  • For NGINX, use the 2.3.0 images from our DockerHub, GitHub Container or Amazon ECR Public Gallery.
  • For NGINX Plus, use the 2.3.0 images from the F5 Container registry or the AWS Marketplace or build your own image using the 2.3.0 source code.
  • For Helm, use version 0.14.0 of the chart. If you’re using custom resources like VirtualServer and TransportServer (controller.enableCustomResources is set to true), after you run the helm upgrade command, the CRDs will not be upgraded. After running the helm upgrade command, run kubectl apply -f deployments/helm-chart/crds to upgrade the CRDs.
  • When upgrading using the manifests, make sure to update the ClusterRole. This is required to enable the ExternalDNS for VirtualServer resources integration.

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and which passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.19-1.24.


2.2.2

23 May 2022

Fixes

  • 2627 Update InternalRoute server_name.

Upgrade

  • For NGINX, use the 2.2.2 images from our DockerHub, GitHub Container or Amazon ECR Public Gallery.
  • For NGINX Plus, use the 2.2.2 images from the F5 Container registry or build your own image using the 2.2.2 source code.
  • For Helm, use version 0.13.2 of the chart.

2.2.1

17 May 2022

Dependencies

  • Update Go dependencies.

Fixes

  • 2654 Sync changes from nginx-openid-connect repo, add zoneSyncLeeway field in policy. For more information on the fixes, see 52.

Upgrade

  • For NGINX, use the 2.2.1 images from our DockerHub, GitHub Container or Amazon ECR Public Gallery.
  • For NGINX Plus, use the 2.2.1 images from the F5 Container registry or build your own image using the 2.2.1 source code.
  • For Helm, use version 0.13.1 of the chart.

2.2.0

12 April 2022

Overview

  • Support for automatic provisioning and management of Certificate resources for VirtualServer resources using cert-manager. Examples for configuring cert-manager with NGINX Ingress Controller can be found here. Please note that HTTP01 type ACME Issuers are not yet supported for use with VirtualServer resources.

  • Full support for IPv6 using the NGINX Ingress Controller VirtualServer and VirtualServerRoute custom resources, and Ingress resources.

  • The -enable-preview-policies cli argument has been deprecated and is no longer required for the usage of any Policy resource type. This argument will be removed completely in v2.6.0.

  • A new -enable-oidc cli argument has been added to enable OIDC policies. Previously, this behaviour was achieved through the usage of the -enable-preview-policies cli argument.

Features

  • 2576 Add support for IPv6.
  • 2572 Automate provisioning of Certificate resources for VirtualServer resources using cert-manager.
  • 2346 Use os.ReadDir for lightweight directory reading. Thanks to Eng Zer Jun.
  • 2360 Add NGINX App Protect reconnect period directive.
  • 2479 Add cli argument to configure NGINX App Protect log level.
  • 2455 Increase memory available for NGINX App Protect xml parser.
  • 2580 Create -enable-oidc command line argument for OIDC policy.
  • 2566 Unbind policy from preview policies.
  • 2582 Rename Make targets from openshift to ubi.

Fixes

  • 2378 Fix healthcheck ports.
  • 2404 Start nginx with -e stderr parameter.
  • 2414 Fix in file nginx-plus.virtualserver.tmpl ApDosMonitor->ApDosMonitorURI.

Helm Chart

  • 2525 Extend helm chart to include NGINX Service Mesh fields.
  • 2294 Add extra containers to helm chart. Thanks to Márk Sági-Kazár.

Upgrade

  • For NGINX, use the 2.2.0 images from our DockerHub, GitHub Container or Amazon ECR Public Gallery.
  • For NGINX Plus, use the 2.2.0 images from the F5 Container registry or the AWS Marketplace or build your own image using the 2.2.0 source code.
  • For Helm, use version 0.13.0 of the chart. If you’re using custom resources like VirtualServer and TransportServer (controller.enableCustomResources is set to true), after you run the helm upgrade command, the CRDs will not be upgraded. After running the helm upgrade command, run kubectl apply -f deployments/helm-chart/crds to upgrade the CRDs.
  • When upgrading using the manifests, make sure to update the ClusterRole. This is required to enable the cert-manager for VirtualServer resources integration.
  • The -enable-preview-policies cli argument has been deprecated, and is no longer required for any Policy resources.
  • Enabling OIDC Policies now requires the use of -enable-oidc cli argument instead of the -enable-preview-policies cli argument.

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and which passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.19-1.23.


2.1.2

29 March 2022

Dependencies

  • Update UBI based images to 8.

Upgrade


1.12.4

23 March 2022

Dependencies

  • Update NGINX version to 1.21.6.
  • Update NGINX Plus version to R26.
  • Update Debian to Bullseye.
  • Update Alpine to 3.15.
  • Update UBI to 8.
  • Update Go to 1.17 and Go dependencies.

Fixes

  • Fix OpenTracing not working with NGINX Plus.

Upgrade

  • For NGINX, use the 1.12.4 image from our DockerHub: nginx/nginx-ingress:1.12.4, nginx/nginx-ingress:1.12.4-alpine or nginx/nginx-ingress:1.12.4-ubi
  • For NGINX Plus, please build your own image using the 1.12.4 source code.
  • For Helm, use version 0.10.4 of the chart.

2.1.1

17 February 2022

Dependencies

  • Update NGINX version to 1.21.6.
  • Update NGINX Plus version to R26.

Upgrade

  • For NGINX, use the 2.1.1 images from our DockerHub, GitHub Container or Amazon ECR Public Gallery.
  • For NGINX Plus, use the 2.1.1 images from the F5 Container registry or build your own image using the 2.1.1 source code.
  • For Helm, use version 0.12.1 of the chart.

2.1.0

06 January 2022

Overview

  • Support for NGINX App Protect Denial of Service protection with NGINX Ingress Controller. More information about NGINX App Protect DoS. Examples for configuring NGINX App Protect DoS with NGINX Ingress Controller can be found here.

  • Full support for gRPC services using the NGINX Ingress Controller VirtualServer and VirtualServerRoute custom resource definitions. This makes configuring and supporting gRPC services much easier, giving a simple YAML configuration and removing the need for snippets. Resource definition examples for gRPC can be found here.

  • Implementation of NGINX mandatory and persistent health checks in VirtualServer and VirtualServerRoute to further reduce interruptions to your service traffic as configuration changes continuously happen in your dynamic Kubernetes environment(s). Health checks have been extended to include mandatory and persistent fields (see the sketch after this list). Mandatory health checks ensure that a new upstream server starts receiving traffic only after the health check passes, and can be marked as persistent so that the previous state is remembered when the Ingress Controller reloads the NGINX Plus configuration. When combined with the slow-start parameter, mandatory health checks give a new upstream server more time to connect to databases and “warm up” before being asked to handle their full share of traffic. See the settings here. More about the NGINX Plus mandatory and persistent health check features.
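A sketch of an upstream health check with the new fields, assuming mandatory and persistent sit alongside the existing healthCheck options (service names are illustrative):

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
    slow-start: 30s         # give a new server time to warm up
    healthCheck:
      enable: true
      mandatory: true       # new servers receive traffic only after the check passes
      persistent: true      # remember state across NGINX Plus configuration reloads
  routes:
  - path: /
    action:
      pass: tea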

Features

  • 2251 Enable setting mandatory and persistent in upstream healthchecks in VS and VSR.
  • 2241 Add support for NGINX App Protect DoS.
  • 2200 Add Alpine image with OpenTracing.
  • 2178 Support healthchecks in gRPC upstreams.
  • 2110 Support gRPC in the Upstreams of the virtual server resources. Particular thanks to Chiyu Zhong for all their work.
  • 2149 Add metric about total number of TransportServers.
  • 2100 Add support for initContainers. Thanks to Gunnar Scherf.
  • 1827 Add support for wildcard cert in VirtualServer resources. Thanks to Simon Wachter.
  • 2107 Add option to download the NGINX Ingress Controller binary. Introduced a new TARGET download in the Makefile which can be used when building the NGINX Ingress Controller Docker image. With this option the Ingress Controller binary will be downloaded instead of built from source.
  • 2044 Upload NGINX Ingress Controller binaries to release.
  • 2094 AP: update appolicies crd.
  • 2216 Add grpc_status to the logs.
  • 2237 Unbind app-protect from -preview-policies.
  • 2273 Make the resource comparison more informative in case of an error. Thanks to Andrey Karpov.
  • 2124 Apply -enable-snippets cli arg to Ingresses. This PR extends the existing -enable-snippets cli argument to apply to Ingress resources. If snippets are not enabled, the Ingress Controller will reject any Ingress resources with snippets annotations. Previously, the argument only applied to VirtualServer, VirtualServerRoute and TransportServer resources. Please Note: this is a breaking change. See the UPGRADE instructions below.

Fixes

  • 2267 Fix URI rewrite in VirtualServers and VirtualServerRoutes.
  • 2260 Check if refresh token is undefined and do not store it in this case. Thanks to tippexs for the fix.
  • 2215 enableSnippets should not depend on enableCustomResources. Thanks to Alessio Casco for the fix.
  • 1934 AP: fix watch-namespace for NAP resources.
  • 2125 Allow empty string in server-tokens annotation for NGINX Plus.
  • 2042 Use release specific repo for NGINX Plus on Debian.

Dependencies

  • 2173 Update Debian to Bullseye.
  • Update NGINX Plus version to R25.
  • Update NGINX version to 1.21.5.

Upgrade

  • For NGINX, use the 2.1.0 images from our DockerHub, GitHub Container or Amazon ECR Public Gallery.
  • For NGINX Plus, use the 2.1.0 images from the F5 Container registry or build your own image using the 2.1.0 source code.
  • For Helm, use version 0.12.0 of the chart.
  • We changed the behaviour of snippets in Ingress resources by extending the existing -enable-snippets cli argument to apply to Ingress resources as well as VirtualServer, VirtualServerRoute and TransportServer resources. Because the default value of -enable-snippets is false, if you are using snippets in Ingress resources, you must explicitly set the -enable-snippets to true before upgrading the Ingress Controller, so that the new version of the Ingress Controller doesn’t reject Ingresses with snippets annotations.

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and which passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.19-1.23.


2.0.3

28 October 2021

Breaking Changes

  • 2124 Apply -enable-snippets cli arg to Ingresses. This PR extends the existing -enable-snippets cli argument to apply to Ingress resources. If snippets are not enabled, the Ingress Controller will reject any Ingress resources with snippets annotations. Previously, the argument only applied to VirtualServer, VirtualServerRoute and TransportServer resources. Please Note: this is a breaking change. See the UPGRADE instructions below.

Fixes

  • 2132 Install libcurl on OpenTracing for NGINX Plus.

Upgrade

  • For NGINX, use the 2.0.3 image from our DockerHub: nginx/nginx-ingress:2.0.3, nginx/nginx-ingress:2.0.3-alpine or nginx/nginx-ingress:2.0.3-ubi
  • For NGINX Plus, please build your own image using the 2.0.3 source code.
  • For Helm, use version 0.11.3 of the chart.
  • We changed the behaviour of snippets in Ingress resources by extending the existing -enable-snippets cli argument to apply to Ingress resources as well as VirtualServer, VirtualServerRoute and TransportServer resources. Because the default value of -enable-snippets is false, if you are using snippets in Ingress resources, you must explicitly set the -enable-snippets to true before upgrading the Ingress Controller, so that the new version of the Ingress Controller doesn’t reject Ingresses with snippets annotations.

1.12.3

28 October 2021

Fixes

  • 2133 Use release specific repo for the App Protect packages on Debian. This fixes an error when building Debian-based images with NGINX Plus with App Protect: previously, building an image would fail with the error nginx-plus-module-appprotect : Depends: app-protect-plugin (= 3.639.0-1~buster) but 3.671.0-1~buster is to be installed. The bug first appeared when NGINX App Protect version 3.6 was released on 13 October 2021.
  • 2134 Apply -enable-snippets cli arg to Ingresses. This PR extends the existing -enable-snippets cli argument to apply to Ingress resources. If snippets are not enabled, the Ingress Controller will reject any Ingress resources with snippets annotations. Previously, the argument only applied to VirtualServer, VirtualServerRoute and TransportServer resources. Please Note: this is a breaking change. See the UPGRADE instructions below.

Upgrade

  • For NGINX, use the 1.12.3 image from our DockerHub: nginx/nginx-ingress:1.12.3, nginx/nginx-ingress:1.12.3-alpine or nginx/nginx-ingress:1.12.3-ubi
  • For NGINX Plus, please build your own image using the 1.12.3 source code.
  • For Helm, use version 0.10.3 of the chart.
  • We changed the behaviour of snippets in Ingress resources by extending the existing -enable-snippets cli argument to apply to Ingress resources as well as VirtualServer, VirtualServerRoute and TransportServer resources. Because the default value of -enable-snippets is false, if you are using snippets in Ingress resources, you must explicitly set the -enable-snippets to true before upgrading the Ingress Controller, so that the new version of the Ingress Controller doesn’t reject Ingresses with snippets annotations.

2.0.2

13 October 2021

Dependencies

  • Update NGINX App Protect version to 3.6.
  • Update NGINX Plus version to R25 in NGINX App Protect enabled images.

Features

  • 2074 Update JWT library to golang-jwt/jwt. Previously, the Ingress Controller used dgrijalva/jwt-go, which has a vulnerability CVE-2020-26160. Note that the Ingress Controller wasn’t affected by this vulnerability, and the jwt library was used only in the NGINX Plus images from AWS Marketplace for Containers.

Upgrade

  • For NGINX, use the 2.0.2 image from our DockerHub.
  • For NGINX Plus, use the 2.0.2 from the F5 Container registry or build your own image using the 2.0.2 source code.
  • For Helm, use version 0.11.2 of the chart.

2.0.1

07 October 2021

Fixes

  • 2051 Use release specific repo for NGINX Plus on Debian. This fixes an error when building the Debian-based image with NGINX Plus and App Protect: previously, building the image would fail with the error Package 'nginx-plus-r24' has no installation candidate.

Upgrade

  • For NGINX, use the 2.0.1 image from our DockerHub.
  • For NGINX Plus, use the 2.0.1 from the F5 Container registry or build your own image using the 2.0.1 source code.
  • For Helm, use version 0.11.1 of the chart.

1.12.2

7 October 2021

Fixes

  • 2048 Use release specific repo for NGINX Plus on Debian. This fixes an error when building Debian-based images with NGINX Plus: previously, building an image would fail with the error Package 'nginx-plus-r24' has no installation candidate. The bug first appeared when NGINX Plus R25 was released on 28 September 2021.

Upgrade

  • For NGINX, use the 1.12.2 image from our DockerHub: nginx/nginx-ingress:1.12.2, nginx/nginx-ingress:1.12.2-alpine or nginx/nginx-ingress:1.12.2-ubi
  • For NGINX Plus, please build your own image using the 1.12.2 source code.
  • For Helm, use version 0.10.2 of the chart.

2.0.0

28 September 2021

Overview

Release 2.0.0 includes:

  • Support for Ingress networking.k8s.io/v1. Kubernetes 1.22 removes support for networking.k8s.io/v1beta1. To support Kubernetes 1.22, NGINX Ingress Controller 2.0 is also compatible with only the networking.k8s.io/v1 version of the Ingress and IngressClass resources. This has the following implications:
    1. The minimum supported version of Kubernetes is now 1.19. For older Kubernetes versions, use the 1.12.x release of the Ingress Controller.

    2. For Kubernetes versions 1.19-1.21, you can continue using the networking.k8s.io/v1beta1 version of the Ingress and IngressClass resources.

    3. For Kubernetes 1.22, you need to migrate your Ingress and IngressClass resources to networking.k8s.io/v1.

    4. If you are using the deprecated kubernetes.io/ingress.class annotation in your Ingress resources, it is recommended to switch to the ingressClassName field.

      We migrated all our documentation and examples to use networking.k8s.io/v1 and the ingressClassName field of the Ingress resource (see the sketch after this list).

  • Scalability improvements. We improved the time for an Ingress Controller pod to become ready and start receiving traffic. This is especially noticeable when you have hundreds of Ingress or other configuration resources like VirtualServers: instead of several minutes or more in rare cases, a pod will become ready within a few minutes.
  • Documentation improvements. We changed the look and feel of our documentation at https://docs.nginx.com/nginx-ingress-controller, as well as the underlying publishing technology, which will allow us to bring even more improvements in the next releases.
  • Upgrade path for the k8s.nginx.org/v1alpha1 Policy resource. If you’re running release 1.9.0 and using the k8s.nginx.org/v1alpha1 Policy, the Ingress Controller now supports an upgrade path from v1alpha1 to the v1 Policy version without downtime. See the UPDATING POLICIES section below.
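A minimal networking.k8s.io/v1 Ingress sketch using ingressClassName instead of the deprecated annotation (host and service names are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  ingressClassName: nginx     # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cafe-svc
            port:
              number: 80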

You will find the complete changelog for release 2.0.0, including bug fixes, improvements, and changes below.

Breaking Changes

  • 1850 Support Ingress and IngressClass v1.

Features

  • 1908 Add NTLM support to VirtualServer and VirtualServerRoute upstreams.
  • 1746 Add ingressClassName field to Policy.
  • 1956 Add v1alpha1 version back to policy CRD.
  • 1907 Remove libs compilation for OpenTracing in Dockerfile; add Zipkin and Datadog in addition to the already supported Jaeger tracer; additionally, for NGINX we now publish a Docker image with the tracers and the OpenTracing module on DockerHub: nginx-ic/nginx-plus-ingress:1.12.0-ot. Also thanks to MatyRi for upgrading OpenTracing in 1883.
  • 1788 Reload only once during the start. This significantly reduces the time it takes for an Ingress Controller pod to become ready when hundreds of Ingress or other supported resources are created in the cluster.
  • 1721 Increase default reload timeout to 60s: the Ingress Controller will wait for 60s for NGINX to start or reload. Previously, the default was 4 seconds.
  • 2009 Increase default upstream zone size for NGINX Plus. See the INCREASED UPSTREAM ZONES section below.

Fixes

  • 1926 Fix increased IC pod startup time when hundreds of VirtualServerRoutes are used
  • 1712 Allow make to build image when .git directory is missing.

Documentation

  • 1932 Add IAM instructions for NGINX Plus AWS Marketplace images.
  • 1927 Fix function name comments typo. Thanks to Sven Nebel.
  • 1898 Add instructions for configuring MyF5 JWT as a Docker registry secret for the F5 Container registry for NGINX Plus images.
  • 1851 Update docs and examples to use networking.k8s.io/v1.
  • 1765 Create documentation for pulling NGINX Plus images from the F5 Container registry.
  • 1740 Publish docs using Hugo and Netlify.
  • 1702 Add security recommendations documentation.

Helm Chart

  • Add new parameters to the Chart: controller.pod.extraLabels. Added in 1884.

Dependencies

  • 1855 Update minimum Kubernetes version to 1.19; remove the -use-ingress-class-only command-line argument, which doesn’t work with Kubernetes >= 1.19.
  • Update NGINX Plus version to R25. Note: images with NGINX App Protect will continue to use R24 until App Protect 3.6 is released.
  • Update NGINX version to 1.21.3.

Upgrade

  • For NGINX, use the 2.0.0 image from our DockerHub.
  • For NGINX Plus, use the 2.0.0 image from the F5 Container registry or build your own image using the 2.0.0 source code.
  • For Helm, use version 0.11.0 of the chart.

See the complete list of supported images for NGINX and NGINX Plus on the Technical Specifications page.

INCREASED UPSTREAM ZONES

We increased the default size of an upstream zone from 256K to 512K to accommodate a change in NGINX Plus R25. The change makes NGINX Plus allocate more memory for storing upstream server (peer) data, which means upstream server zones will use more memory to account for that new data.

The increase in the zone size prevents NGINX Plus configuration reload failures after an upgrade to release 2.0.0. Note that if a zone becomes full, NGINX Plus will fail to reload and fail to add more upstream servers via the API.

The new 512K default value will be able to hold ~270 upstream servers per upstream, roughly the same number the old 256K value could hold in previous Ingress Controller releases. You can check the utilization of the upstream zones via the NGINX Plus API and the NGINX Plus dashboard (the shared zones tab).

If you have a large number of upstreams in the NGINX Plus configuration of the Ingress Controller, expect NGINX Plus to consume more memory after an upgrade: an extra 256K per upstream. If you don’t have upstreams with a huge number of upstream servers and you’d like to reduce the memory usage of NGINX Plus, you can configure the upstream-zone-size ConfigMap key with a lower value. Additionally, the Ingress resource supports the nginx.org/upstream-zone-size annotation to configure zone sizes for the upstreams of an Ingress resource rather than globally, as sketched below.
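
For example, assuming the conventional nginx-config ConfigMap name and namespace (placeholders here), the global key and the per-Ingress annotation could be set as follows:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config              # placeholder; the ConfigMap used by the Ingress Controller
      namespace: nginx-ingress        # placeholder
    data:
      upstream-zone-size: "256k"      # lower than the new 512K default
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: cafe-ingress              # placeholder
      annotations:
        nginx.org/upstream-zone-size: "512k"   # overrides the global value for this Ingress only
    spec:
      ingressClassName: nginx
      # rules omitted for brevity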

UPDATING POLICIES

This section is only relevant if you’re running release 1.9.0 and planning to upgrade to release 2.0.0.

Release 1.10 removed the k8s.nginx.org/v1alpha1 version of the Policy resource and introduced the k8s.nginx.org/v1 version. This meant that to upgrade to release 1.10, users had to re-create v1alpha1 Policies with the v1 version, which caused downtime for their applications. Release 2.0.0 brings back support for the v1alpha1 Policy, which makes it possible to upgrade from release 1.9.0 to 2.0.0 without causing downtime:

  • If the Policy is marked as a preview feature in the documentation, make sure the -enable-preview-policies command-line argument is set in 2.0.0 Ingress Controller.
  • During the upgrade, the existing Policies will not be removed.
  • After the upgrade, make sure to update the Policy manifests to k8s.nginx.org/v1 version.

Please also read the release 1.10 changelog for the instructions on how to update Secret resources, which is also necessary since some of the Policies reference Secrets.

Note that 2.1.0 will remove support for the v1alpha1 version of the Policy.
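
As a reference for updating the manifests, a v1 Policy using the generally available accessControl feature might look like this sketch (the name and CIDR are placeholders):

    apiVersion: k8s.nginx.org/v1
    kind: Policy
    metadata:
      name: webapp-policy             # placeholder
    spec:
      accessControl:
        allow:
        - 10.0.0.0/8                  # placeholder CIDR

VirtualServer and VirtualServerRoute resources reference Policies by name via their policies field.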

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and which passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.19-1.22.


1.12.1

8 September 2021

Dependencies

  • Update NGINX App Protect version to 3.5.

Upgrade

  • For NGINX, use the 1.12.1 image from our DockerHub: nginx/nginx-ingress:1.12.1, nginx/nginx-ingress:1.12.1-alpine or nginx/nginx-ingress:1.12.1-ubi
  • For NGINX Plus, use the 1.12.1 image from the F5 Container Registry - see the documentation here
  • Alternatively, you can also build your own image using the 1.12.1 source code.
  • For Helm, use version 0.10.1 of the chart.

1.12.0

30 June 2021

Overview

Release 1.12.0 includes:

  • The introduction of pre-built containers for advanced capabilities with NGINX Plus through the F5 Container Registry.
  • TransportServer support for TCP/UDP connections through the NGINX stream module, adding health checks that match specific response patterns for granular availability testing of your applications, maximum connections to protect your applications from overload, fine-tuning of load balancing behavior, and snippets for advanced use cases as soon as you are ready to implement them (a minimal sketch follows this list).
  • Availability through the AWS Container Marketplace, supporting Elastic Kubernetes Service.
  • NGINX App Protect support has been extended to the latest version and its capabilities.
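
A minimal TransportServer sketch for the TCP/UDP features above is shown below. It assumes a matching listener has been defined in a GlobalConfiguration resource, and the field names for maximum connections (maxConns) and the health-check match block are recalled from the TransportServer documentation rather than verified against this release, so treat them as assumptions:

    apiVersion: k8s.nginx.org/v1alpha1
    kind: TransportServer
    metadata:
      name: dns-tcp                    # placeholder
    spec:
      listener:
        name: dns-tcp                  # assumes a listener of this name in a GlobalConfiguration resource
        protocol: TCP
      upstreams:
      - name: dns-app
        service: coredns               # placeholder Service
        port: 5353
        maxConns: 32                   # assumed field name for the max-connections feature
        healthCheck:
          enable: true
          match:                       # assumed shape of the health-check match
            send: "ping"
            expect: "pong"
      action:
        pass: dns-app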

You will find the complete changelog for release 1.12.0, including bug fixes, improvements, and changes below.

Features

  • 1633 Support match in TransportServer health checks.
  • 1619 Add AWS Marketplace Entitlement verification.
  • 1480 Add max connections to TransportServer.
  • 1479 Add load balancing method to TransportServer.
  • 1466 Support snippets in TransportServer.
  • 1578 Add support for CSRF protection in APPolicy.
  • 1513 Support multiple log security configs in Ingresses.
  • 1481 Add support for user defined browsers in APPolicy.
  • 1411 Add unary gRPC support in APPolicy.
  • 1671 Simplify Dockerfile stages for Debian.
  • 1652 Add HTTPS option to Prometheus endpoint.
  • 1646 Improve Dockerfile.
  • 1574 Add Docker image for Alpine with NGINX Plus.
  • 1512 Don’t require default server TLS secret.
  • 1500 Support ssl_reject_handshake in Ingress and VS.
  • 1494 Add logs around NGINX Plus binary/flag mismatch.
  • 1492 Update the IC so that GlobalConfiguration is not mandatory when configured.
  • 1500 Support ssl_reject_handshake in Ingress and VS. Previously, to handle missing or invalid TLS Secrets in Ingress and VirtualServer resources, the Ingress Controller would configure NGINX to break any attempts for clients to establish TLS connections to the affected hosts using ssl_ciphers NULL; in the NGINX configuration. The method didn’t work for TLS v1.3. Now the Ingress Controller uses ssl_reject_handshake on;, which works for TLS v1.3.
  • Documentation improvements: 1649.

Fixes

  • 1658 Add missing njs module to the openshift-image-nap-plus image.
  • 1654 Fix incorrect configuration and unexpected warnings about Secrets at the IC start.
  • 1501 Fix ungraceful shutdown of NGINX.
  • Documentation fixes: 1668, 1594 thanks to shaggy245, 1563, 1551.

Helm Chart

  • Add new parameters to the Chart: prometheus.scheme, prometheus.secret. Added in 1652.

Dependencies

  • 1604 Update NGINX Plus to R24. Previously, the Dockerfile had a fixed NGINX Plus version. Now the Dockerfile has a floating version that corresponds to the latest major NGINX Plus version. In the event of a patch version of NGINX Plus being released, make sure to rebuild your image to get the latest version (previously, we released a new Ingress Controller release in that case). Additionally, the AppProtect related packages are no longer fixed – the Dockerfile will always install the latest version of the packages that work with the latest NGINX Plus version.
  • Update NGINX version to 1.21.0.

Upgrade

  • For NGINX, use the 1.12.0 image from our DockerHub: nginx/nginx-ingress:1.12.0, nginx/nginx-ingress:1.12.0-alpine or nginx/nginx-ingress:1.12.0-ubi
  • For NGINX Plus, please build your own image using the 1.12.0 source code.
  • For Helm, use version 0.10.0 of the chart.

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and which passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.16-1.21.


1.11.3

25 May 2021

Dependencies

  • Update NGINX version to 1.21.0.

Upgrade

  • For NGINX, use the 1.11.3 image from our DockerHub: nginx/nginx-ingress:1.11.3, nginx/nginx-ingress:1.11.3-alpine or nginx/nginx-ingress:1.11.3-ubi
  • For NGINX Plus, please build your own image using the 1.11.3 source code.
  • For Helm, use version 0.9.3 of the chart.

1.11.2

19 May 2021

Dependencies

  • Update NGINX Plus version to R23 P1.

Upgrade

  • For NGINX, use the 1.11.2 image from our DockerHub: nginx/nginx-ingress:1.11.2, nginx/nginx-ingress:1.11.2-alpine or nginx/nginx-ingress:1.11.2-ubi
  • For NGINX Plus, please build your own image using the 1.11.2 source code.
  • For Helm, use version 0.9.2 of the chart.

1.11.1

7 April 2021

Dependencies

  • Update NGINX version to 1.19.9.
  • Update the OpenSSL libraries used in the UBI images.

Upgrade

  • For NGINX, use the 1.11.1 image from our DockerHub: nginx/nginx-ingress:1.11.1, nginx/nginx-ingress:1.11.1-alpine or nginx/nginx-ingress:1.11.1-ubi
  • For NGINX Plus, please build your own image using the 1.11.1 source code.
  • For Helm, use version 0.9.1 of the chart.

1.11.0

31 March 2021

Overview

Release 1.11.0 includes:

  • Native NGINX Ingress Controller App Protect (WAF) policy (a minimal sketch follows this list)
  • TransportServer improvements in reliability, added features, and operational aspects
  • Integration of NGINX Ingress Controller with Istio service mesh
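
A hedged sketch of the WAF policy mentioned above is shown below; the field names are based on the Policy documentation, the APPolicy reference is a placeholder, and if the WAF policy is still a preview feature in this release, the -enable-preview-policies argument must also be set:

    apiVersion: k8s.nginx.org/v1
    kind: Policy
    metadata:
      name: waf-policy                 # placeholder
    spec:
      waf:
        enable: true
        apPolicy: "default/dataguard-alarm"   # placeholder reference to an existing APPolicy (namespace/name)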

You will find the complete changelog for release 1.11.0, including bug fixes, improvements, and changes below.

Features

  • 1317 Add status field to Policy resource.
  • 1449 Add support for ClusterIP in upstreams in VirtualServers/VirtualServerRoutes.
  • 1413 Add serverSnippets to TransportServer.
  • 1425 Add status field to TransportServer resource.
  • 1384 Add active health checks to TransportServer.
  • 1382 Add passive health checks to TransportServer.
  • 1346 Add configurable timeouts to TransportServer.
  • 1297 Support custom return in the default server. Thanks to 030.
  • 1378 Add WAF Policy.
  • 1420 Support IngressClassName in TransportServer.
  • 1415 Handle host and listener collisions for TransportServer resource.
  • 1322 Improve VirtualServer/VirtualServerRoute warnings for Policies.
  • 1288 Add stricter validation for some ingress annotations.
  • 1241 Refactor Dockerfile and Makefile.
  • Documentation improvements: 1320, 1326, and 1377.

Fixes

  • 1457 Wait for caches to sync when the Ingress Controller starts.
  • 1444 Fix setting host header in action proxy in VirtualServer/VirtualServerRoute.
  • 1396 Fix reload timeout calculation for verifying NGINX reloads.

Dependencies

Known Issues

  • 1448 When an Ingress Controller pod starts, it can report warnings about missing secrets for Ingress and other resources that reference secrets. Those warnings are intermittent: once the Ingress Controller fully processes the resources of the cluster, it clears the warnings. Only after that will the Ingress Controller become ready to accept client traffic, meaning its readiness probe will succeed.

Upgrade

  • For NGINX, use the 1.11.0 image from our DockerHub: nginx/nginx-ingress:1.11.0, nginx/nginx-ingress:1.11.0-alpine or nginx/nginx-ingress:1.11.0-ubi
  • For NGINX Plus, please build your own image using the 1.11.0 source code.
  • For Helm, use version 0.9.0 of the chart.
  • 1241 improved the Makefile. As a result, the commands for building the Ingress Controller image were changed. See the updated commands here.
  • 1241 also consolidated all Dockerfiles into a single Dockerfile. If you customized any of the Dockerfiles, make sure to port the changes to the new Dockerfile.
  • 1288 further improved validation of Ingress annotations. See this document to learn more about which annotations are validated. Note that the Ingress Controller will reject resources with invalid annotations, which means clients will see 404 responses from NGINX. Before upgrading, ensure the Ingress resources don’t have annotations with invalid values. Otherwise, after the upgrade, the Ingress Controller will reject such resources.
  • 1457 fixed the bug when an Ingress Controller pod could become ready before it generated the configuration for all relevant resources in the cluster. The fix also requires that the Ingress Controller can successfully list the relevant resources from the Kubernetes API. For example, if the -enable-custom-resources cli argument is true (which is the default), the VirtualServer, VirtualServerRoute, TransportServer, and Policy CRDs must be created in the cluster, so that the Ingress Controller can list them. This is similar to other custom resources – see the list here. Thus, before upgrading, make sure that the CRDs are created in the cluster. Otherwise, the Ingress Controller pods will not become ready.

Supported Platforms

We will provide technical support for NGINX Ingress Controller on any Kubernetes platform that is currently supported by its provider and which passes the Kubernetes conformance tests. This release was fully tested on the following Kubernetes versions: 1.16-1.20.


1.10.1

16 March 2021

Dependencies

  • Update NGINX version to 1.19.8.
  • Add Kubernetes 1.20 support.

Fixes

  • 1373, 1439, 1440: Fix various issues in the Makefile. In 1.10.0, a bug was introduced that prevented building Ingress Controller images on versions of make < 4.1.

Upgrade

  • For NGINX, use the 1.10.1 image from our DockerHub: nginx/nginx-ingress:1.10.1, nginx/nginx-ingress:1.10.1-alpine or nginx/nginx-ingress:1.10.1-ubi
  • For NGINX Plus, please build your own image using the 1.10.1 source code.
  • For Helm, use version 0.8.1 of the chart.

1.10.0

26 January 2021

Overview

Release 1.10.0 includes:

  • OpenID Connect authentication policy.
  • Improved handling of Secret resources with extended validation and error reporting.
  • Improved visibility with Prometheus metrics for the configuration workqueue and the ability to annotate NGINX logs with the metadata of Kubernetes resources.
  • NGINX App Protect User-Defined signatures support.
  • Improved validation of Ingress annotations.

You will find the complete changelog for release 1.10.0, including bug fixes, improvements, and changes below.

Features

  • 1304 Add Open ID Connect policy.
  • 1281 Add support for App Protect User Defined Signatures.
  • 1266 Add workqueue metrics to Prometheus metrics.
  • 1233 Annotate tcp metrics with k8s object labels.
  • 1231 Support k8s objects variables in log format.
  • 1270 and 1277 Improve validation of Ingress annotations.
  • 1265 Report warnings for misconfigured TLS and JWK secrets.
  • 1262 Use setcap(8) only once. 1263 Use chown(8) only once. 1264 Use mkdir(1) only once. Thanks to Sergey A. Osokin.
  • 1256 and 1260 Improve handling of secret resources.
  • 1240 Validate TLS and CA secrets.
  • 1235 Use buildkit secret flag for NGINX plus images.
  • 1290 Graduate policy resource and accessControl policy to generally available.
  • 1225 Require secrets to have types.
  • 1237 Deprecate support for helm2 clients.
  • Documentation improvements: 1282, 1293, 1303, 1315.

Helm Chart

  • 1290 Add new preview policies parameter to chart. controller.enablePreviewPolicies was added.
  • 1232 Replace deprecated imagePullSecrets helm setting. controller.serviceAccount.imagePullSecrets was removed. controller.serviceAccount.imagePullSecretName was added.
  • 1228 Fix installation of ingressclass on Kubernetes versions v1.18.x-*

Dependencies

  • 1299 Update NGINX App Protect version to 2.3 and debian distribution to debian:buster-slim.
  • 1291 Update NGINX OSS to 1.19.6. Update NGINX Plus to R23.

Upgrade

  • For NGINX, use the 1.10.0 image from our DockerHub: nginx/nginx-ingress:1.10.0, nginx/nginx-ingress:1.10.0-alpine or nginx/nginx-ingress:1.10.0-ubi

  • For NGINX Plus, please build your own image using the 1.10.0 source code.

  • For Helm, use version 0.8.0 of the chart.

  • As a result of 1270 and 1277, the Ingress Controller improved validation of Ingress annotations: more annotations are validated and validation errors are reported via events for Ingress resources. Additionally, the default behavior for invalid annotation values was changed: instead of using the default values, the Ingress Controller will reject a resource with an invalid annotation value, which will make clients see 404 responses from NGINX. See this document to learn more. Before upgrading, ensure the Ingress resources don’t have annotations with invalid values. Otherwise, after the upgrade, the Ingress Controller will reject such resources.

  • In 1232 controller.serviceAccount.imagePullSecrets was removed. Use the new controller.serviceAccount.imagePullSecretName instead.

  • The Policy resource was promoted to v1. If you used the alpha1 version, the policies need to be re-created with the v1 version. Before upgrading the Ingress Controller, run the following command to remove the alpha1 policies CRD (this will also remove all existing alpha1 policies):

     kubectl delete crd policies.k8s.nginx.org
    

    As part of the upgrade, make sure to create the v1 policies CRD. See the corresponding instructions for the manifests and Helm installations.

    Also note that all policies except for accessControl are still in preview. To enable them, run the Ingress Controller with the -enable-preview-policies command-line argument (the controller.enablePreviewPolicies Helm parameter; a values sketch follows this list).

  • It is necessary to update secret resources. See the section UPDATING SECRETS below.
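
For the preview-policies note above, a minimal Helm values sketch would be:

    # values.yaml (Helm installation) -- enables the preview policies
    controller:
      enablePreviewPolicies: true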

UPDATING SECRETS:

In 1225, as part of improving how the Ingress Controller handles secret resources, we added a requirement for secrets to be of one of the following types:

  • kubernetes.io/tls for TLS secrets.
  • nginx.org/jwk for JWK secrets.
  • nginx.org/ca for CA secrets.

The Ingress Controller now ignores secrets that are not of a supported type. As a consequence, special upgrade steps are required.

Before upgrading, ensure that the secrets referenced in Ingress, VirtualServer, or Policy resources are of a supported type, which is configured via the type field. Because that field is immutable, it is necessary to either:

  • Recreate the secrets. Note that in this case, the client traffic for the affected resources will be rejected for the period during which a secret doesn’t exist in the cluster.
  • Create copies of the secrets and update the affected resources to reference the copies. The copies need to be of a supported type. In contrast with the previous option, this will not make NGINX reject the client traffic.

It is also necessary to update the default server secret and the wildcard secret (if it was configured) in case their type is not kubernetes.io/tls. The steps depend on how you installed the Ingress Controller: via manifests or Helm. Performing the steps will not lead to a disruption of the client traffic, as the Ingress Controller retains the default and wildcard secrets if they are removed.
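
For example, a default server secret of the supported kubernetes.io/tls type looks like the following sketch; the name, namespace, and data are placeholders:

    apiVersion: v1
    kind: Secret
    metadata:
      name: default-server-secret     # placeholder
      namespace: nginx-ingress        # placeholder
    type: kubernetes.io/tls
    data:
      tls.crt: <base64-encoded certificate>   # placeholder
      tls.key: <base64-encoded private key>   # placeholder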

For manifests installation:

  1. Recreate the default server secret and the wildcard secret with the type kubernetes.io/tls.
  2. Upgrade the Ingress Controller.

For Helm installation, there are two cases:

  1. If Helm created the secrets (you configured controller.defaultTLS.cert and controller.defaultTLS.key for the default secret and controller.wildcardTLS.cert and controller.wildcardTLS.key for the wildcard secret), then no special upgrade steps are required: during the upgrade, Helm will remove the existing default and wildcard secrets and create new ones, with different names, of the type kubernetes.io/tls.
  2. If you created the secrets separately from Helm (you configured controller.defaultTLS.secret for the default secret and controller.wildcardTLS.secret for the wildcard secret):
    1. Recreate the secrets with the type kubernetes.io/tls.
    2. Upgrade to the new Helm release.


1.9.1

23 November 2020

Dependencies

  • Update the base ubi images to 8.3.

Features

  • Fix deployment of ingressclass resource via helm on some versions of Kubernetes.
  • Renew CA cert for egress-mtls example.
  • Add imagePullSecretName support to helm chart.

Upgrade

  • For NGINX, use the 1.9.1 image from our DockerHub: nginx/nginx-ingress:1.9.1, nginx/nginx-ingress:1.9.1-alpine or nginx/nginx-ingress:1.9.1-ubi
  • For NGINX Plus, please build your own image using the 1.9.1 source code.
  • For Helm, use version 0.7.1 of the chart.

1.9.0

20 October 2020

Overview

Release 1.9.0 includes:

  • Support for new Prometheus metrics and enhancements of the existing ones, including configuration reload reason, NGINX worker processes count, upstream latency, and more.
  • Support for rate limiting, JWT authentication, and ingress (client) and egress (upstream) mutual TLS via the Policy resource (a minimal rate-limiting example follows this list).
  • Support for the latest Ingress resource features and the IngressClass resource.
  • Support for NGINX Service Mesh.
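
For illustration, a rate-limiting Policy in this release (which uses the k8s.nginx.org/v1alpha1 apiVersion) might look like the sketch below; the name and limit values are placeholders:

    apiVersion: k8s.nginx.org/v1alpha1
    kind: Policy
    metadata:
      name: rate-limit-policy          # placeholder
    spec:
      rateLimit:
        rate: 10r/s                    # placeholder rate
        key: ${binary_remote_addr}
        zoneSize: 10M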

You will find the complete changelog for release 1.9.0, including bug fixes, improvements, and changes below.

Features

  • 1180 Add support for EgressMTLS.
  • 1166 Add IngressMTLS policy support.
  • 1154 Add JWT policy support.
  • 1120 Add RateLimit policy support.
  • 1058 Support policies in VS routes and VSR subroutes.
  • 1147 Add option to specify other log destinations in AppProtect.
  • 1131 Update packages and CRDs to AppProtect 2.0. This update includes features such as: JSON Schema Validation, User-Defined URLs and User-Defined Parameters. See the release notes for a complete feature list.
  • 1100 Add external references to AppProtect.
  • 1085 Add installation of threat campaigns package.
  • 1133 Add support for IngressClass resources.
  • 1130 Add prometheus latency collector.
  • 1076 Add prometheus worker process metrics.
  • 1075 Add support for NGINX Service Mesh internal routes.
  • 1178 Resolve host collisions in VirtualServer and Ingresses.
  • 1158 Support variables in action proxy headers.
  • 1137 Add pod_owner label to metrics when -spire-agent-address is set.
  • 1107 Extend Upstream Servers with pod_name label.
  • 1099 Add reason label to total_reload metrics.
  • 1088 Extend Upstream Servers and Server Zones metrics, thanks to Raúl.
  • 1080 Support pathType field in the Ingress resource.
  • 1078 Remove trailing blank lines in vs/vsr snippets.
  • Documentation improvements: 1083, 1092, 1089, 1174, 1175, 1171.

Fixes

  • 1179 Fix TransportServers in debian AppProtect image.
  • 1129 Support real-ip in default server.
  • 1110 Add missing threat campaigns key to AppProtect CRD.

Helm Chart

  • 1105 Fix GlobalConfiguration support in helm chart.
  • Add new parameters to the Chart: controller.setAsDefaultIngress, controller.enableLatencyMetrics. Added in 1133 and 1148.

Dependencies

  • 1182 Update NGINX version to 1.19.3.

Upgrade

  • For NGINX, use the 1.9.0 image from our DockerHub: nginx/nginx-ingress:1.9.0, nginx/nginx-ingress:1.9.0-alpine or nginx/nginx-ingress:1.9.0-ubi
  • For NGINX Plus, please build your own image using the 1.9.0 source code.
  • For Helm, use version 0.7.0 of the chart.

For Kubernetes >= 1.18, when upgrading using the manifests, make sure to update the ClusterRole and create the IngressClass resource, which is required for Kubernetes >= 1.18. Otherwise, the Ingress Controller will fail to start. If you run multiple NGINX Ingress Controllers in the cluster, each Ingress Controller must have its own IngressClass resource. As the -use-ingress-class-only argument is now ignored (see NOTES), make sure your Ingress resources have the ingressClassName field or the kubernetes.io/ingress.class annotation set to the name of the IngressClass resource. Otherwise, the Ingress Controller will ignore them.
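
A minimal IngressClass might look like the sketch below; the name is a placeholder and must match the class configured for the Ingress Controller, and the controller value is the one documented for NGINX Ingress Controller:

    apiVersion: networking.k8s.io/v1beta1    # use networking.k8s.io/v1 on clusters that serve it
    kind: IngressClass
    metadata:
      name: nginx                            # placeholder; must match -ingress-class / controller.ingressClass
    spec:
      controller: nginx.org/ingress-controller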

Helm Upgrade

  • If you’re using custom resources like VirtualServer and TransportServer (controller.enableCustomResources is set to true), the CRDs will not be upgraded by the helm upgrade command. After running helm upgrade, run kubectl apply -f deployments/helm-chart/crds to upgrade the CRDs.
  • For Kubernetes >= 1.18, a dedicated IngressClass resource, which is configured by controller.ingressClass, is required per helm release. Ensure controller.ingressClass is not set to the name of the IngressClass of other releases or Ingress Controllers. As the controller.useIngressClassOnly parameter is now ignored (see NOTES), make sure your Ingress resources have the ingressClassName field or the kubernetes.io/ingress.class annotation set to the value of controller.ingressClass. Otherwise, the Ingress Controller will ignore them.

Notes

  • When using Kubernetes >= 1.18, the -use-ingress-class-only command-line argument is now ignored, and the Ingress Controller will only process resources that belong to its class. See IngressClass doc for more details.
  • For Kubernetes >= 1.18, a dedicated IngressClass resource, which is configured by controller.ingressClass, is required per helm release. When upgrading or installing releases, ensure controller.ingressClass is not set to the name of the IngressClass of other releases or Ingress Controllers.

1.8.1

14 August 2020

Dependencies

  • Update NGINX version to 1.19.2.

Upgrade

  • For NGINX, use the 1.8.1 image from our DockerHub: nginx/nginx-ingress:1.8.1, nginx/nginx-ingress:1.8.1-alpine or nginx/nginx-ingress:1.8.1-ubi
  • For NGINX Plus, please build your own image using the 1.8.1 source code.
  • For Helm, use version 0.6.1 of the chart.

1.8.0

22 July 2020

Overview

Release 1.8.0 includes:

  • Support for NGINX App Protect Web Application Firewall.
  • Support for configuration snippets and custom template for VirtualServer and VirtualServerRoute resources.
  • Support for request/response header manipulation and request URI rewriting for VirtualServer/VirtualServerRoute.
  • Introducing a new configuration resource - Policy - with the first policy for IP-based access control.

You will find the complete changelog for release 1.8.0, including bug fixes, improvements, and changes below.

FEATURES FOR VIRTUALSERVER AND VIRTUALSERVERROUTE RESOURCES:

  • 1036: Add VirtualServer custom template support.
  • 1028: Add access control policy.
  • 1019: Add VirtualServer/VirtualServerRoute snippets support.
  • 1006: Add request/response modifiers to VS and VSR.
  • 994: Support Class Field in VS/VSR.
  • 973: Add status to VirtualServer and VirtualServerRoute.

Features

  • 1035: Support for App Protect module.
  • 1029: Add readiness endpoint.
  • 995: Emit event for orphaned VirtualServerRoutes.
  • 1029: Add readiness endpoint. The Ingress Controller now exposes a readiness endpoint on port 8081 at the path /nginx-ready. The endpoint returns a 200 response after the Ingress Controller finishes the initial configuration of NGINX at startup. The pod template was updated to use that endpoint in a readiness probe (see the probe sketch after this list).
  • 980: Enable leader election by default.
  • Documentation improvements: 946 thanks to 谭九鼎, 948, 972, 965.
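
For the readiness endpoint added in 1029, the relevant part of the pod template looks roughly like this sketch (probe timings omitted; tune them for your deployment):

    # Container spec excerpt -- readiness probe for the /nginx-ready endpoint
    readinessProbe:
      httpGet:
        path: /nginx-ready
        port: 8081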

Fixes

  • 1030: Fix port range validation in cli arguments.
  • 953: Fix error logging of master/minion ingresses.

Helm Chart

  • Add new parameters to the Chart: controller.appprotect.enable, controller.globalConfiguration.create, controller.globalConfiguration.spec, controller.readyStatus.enable, controller.readyStatus.port, controller.config.annotations, controller.reportIngressStatus.annotations. Added in 1035, 1034, 1029, 1003 thanks to RubyLangdon.
  • 1047 and 1009: Change how Helm manages the custom resource definitions (CRDs) to support installing multiple Ingress Controller releases. Note: If you’re using the custom resources (controller.enableCustomResources is set to true), this is a breaking change. See the HELM UPGRADE section below for the upgrade instructions.

Dependencies

  • Update NGINX version to 1.19.1.
  • Update NGINX Plus to R22.

Upgrade

  • For NGINX, use the 1.8.0 image from our DockerHub: nginx/nginx-ingress:1.8.0, nginx/nginx-ingress:1.8.0-alpine or nginx/nginx-ingress:1.8.0-ubi
  • For NGINX Plus, please build your own image using the 1.8.0 source code.
  • For Helm, use version 0.6.0 of the chart.

Helm Upgrade

If you’re using custom resources like VirtualServer and TransportServer (controller.enableCustomResources is set to true), after you run the helm upgrade command, the CRDs and the corresponding custom resources will be removed from the cluster. Before upgrading, make sure to back up the custom resources. After running the helm upgrade command, run kubectl apply -f deployments/helm-chart/crds to re-install the CRDs and then restore the custom resources.

Notes

  • As part of installing a release, Helm will install the CRDs unless that step is disabled (see the corresponding doc). The installed CRDs include the CRDs for all Ingress Controller features, including the ones disabled by default (like App Protect with aplogconfs.appprotect.f5.com and appolicies.appprotect.f5.com CRDs).

1.7.2

23 June 2020

Dependencies

  • Update NGINX Plus version to R22.

Upgrade

  • For NGINX, use the 1.7.2 image from our DockerHub: nginx/nginx-ingress:1.7.2, nginx/nginx-ingress:1.7.2-alpine or nginx/nginx-ingress:1.7.2-ubi
  • For NGINX Plus, please build your own image using the 1.7.2 source code.
  • For Helm, use version 0.5.2 of the chart.

1.7.1

4 June 2020

Dependencies

  • Update NGINX version to 1.19.0.

Upgrade

  • For NGINX, use the 1.7.1 image from our DockerHub: nginx/nginx-ingress:1.7.1, nginx/nginx-ingress:1.7.1-alpine or nginx/nginx-ingress:1.7.1-ubi
  • For NGINX Plus, please build your own image using the 1.7.1 source code.
  • For Helm, use version 0.5.1 of the chart.

1.7.0

30 April 2020

Overview

Release 1.7.0 includes:

  • Support for TCP, UDP, and TLS Passthrough load balancing with the new configuration resources: TransportServer and GlobalConfiguration. These resources allow users to deliver complex, non-HTTP-based applications from Kubernetes using NGINX Ingress Controller (a minimal GlobalConfiguration sketch follows this list).
  • Support for error pages in VirtualServer and VirtualServerRoute resources. A user can now specify custom error responses for errors returned by backend applications or generated by NGINX, such as a 502 response.
  • Improved validation of VirtualServer and VirtualServerRoute resources. kubectl and the Kubernetes API server can now detect violations of the structure of VirtualServer/VirtualServerRoute resources and return an error.
  • Support for an operator which manages the lifecycle of the Ingress Controller on Kubernetes or OpenShift. See the NGINX Ingress Operator GitHub repo.
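
A minimal GlobalConfiguration defining a TCP listener that a TransportServer can reference might look like the sketch below; the names, namespace, and port are placeholders, and the Ingress Controller must be pointed at the resource via the -global-configuration command-line argument:

    apiVersion: k8s.nginx.org/v1alpha1
    kind: GlobalConfiguration
    metadata:
      name: nginx-configuration       # placeholder; referenced by the -global-configuration argument
      namespace: nginx-ingress        # placeholder
    spec:
      listeners:
      - name: dns-tcp                 # referenced by spec.listener.name in a TransportServer
        port: 5353
        protocol: TCP

A TransportServer then selects this listener by name, as in the TransportServer sketch in the 1.12.0 section above.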

See the 1.7.0 release announcement blog post, which includes an overview of each feature.

You will find the complete changelog for release 1.7.0, including bug fixes, improvements, and changes below.

FEATURES FOR VIRTUALSERVER AND VIRTUALSERVERROUTE RESOURCES:

  • 868: Add OpenAPI CRD schema validation.
  • 847: Add support for error pages for VS/VSR.

Features

  • 902: Add TransportServer and GlobalConfiguration Resources.
  • 894: Add Dockerfile for NGINX Open Source for Openshift.
  • 857: Add Openshift Dockerfile for NGINX Plus.
  • 852: Add default-server-access-log-off to configmap.
  • 845: Add log-format-escaping and stream-log-format-escaping configmap keys. Thanks to Alexey Maslov.
  • 827: Add ingress class label to all Prometheus metrics.
  • 850: Extend redirect URI validation with protocol check in VS/VSR.
  • 832: Update the examples to run the nginxdemos/nginx-hello:plain-text image, which doesn’t require the root user.
  • 825: Add multi-stage docker builds.
  • 852: Add default-server-access-log-off to configmap. The access logs for the default server are now enabled by default.
  • 847: Add support for error pages for VS/VSR. The PR affects how the Ingress Controller generates configuration for VirtualServer and VirtualServerRoutes. See this comment for more details.
  • 827: Add ingress class label to all Prometheus metrics. Every Prometheus metric exposed by the Ingress Controller now includes the label class with the value of the Ingress Controller class (by default nginx).
  • 825: Add multi-stage docker builds. When building the Ingress Controller image in Docker, we now use a multi-stage docker build.

Fixes

  • 828: Fix error messages for actions of the type return.

Helm Chart

  • Add new parameters to the Chart: controller.enableTLSPassthrough, controller.volumes, controller.volumeMounts, controller.priorityClassName. Added in 921, 878, 807 thanks to Greg Snow.

Dependencies

  • Update NGINX version to 1.17.10.
  • Update NGINX Plus to R21.
  • 854: Update the Debian base images for NGINX Plus to debian:buster-slim.

Upgrade

  • For NGINX, use the 1.7.0 image from our DockerHub: nginx/nginx-ingress:1.7.0, nginx/nginx-ingress:1.7.0-alpine or nginx/nginx-ingress:1.7.0-ubi
  • For NGINX Plus, please build your own image using the 1.7.0 source code.
  • For Helm, use version 0.5.0 of the chart.

When upgrading using the manifests, make sure to deploy the new TransportServer CRD (common/ts-definition.yaml), as it is required by the Ingress Controller. Otherwise, you will get error messages in the Ingress Controller logs.


1.6.3

6 March 2020

Dependencies

  • Update NGINX version to 1.17.9.

Upgrade

  • For NGINX, use the 1.6.3 image from our DockerHub: nginx/nginx-ingress:1.6.3 or nginx/nginx-ingress:1.6.3-alpine
  • For NGINX Plus, please build your own image using the 1.6.3 source code.
  • For Helm, use version 0.4.3 of the chart.

1.6.2

6 February 2020

Dependencies

  • Update NGINX version to 1.17.8.

Upgrade

  • For NGINX, use the 1.6.2 image from our DockerHub: nginx/nginx-ingress:1.6.2 or nginx/nginx-ingress:1.6.2-alpine
  • For NGINX Plus, please build your own image using the 1.6.2 source code.
  • For Helm, use version 0.4.2 of the chart.

1.6.1

14 January 2020

Dependencies

  • Update NGINX version to 1.17.7.

Upgrade

  • For NGINX, use the 1.6.1 image from our DockerHub: nginx/nginx-ingress:1.6.1 or nginx/nginx-ingress:1.6.1-alpine
  • For NGINX Plus, please build your own image using the 1.6.1 source code.
  • For Helm, use version 0.4.1 of the chart.

1.6.0

19 December 2019

Overview

Release 1.6.0 includes:

  • Improvements to VirtualServer and VirtualServerRoute resources, adding support for richer load balancing behavior, more sophisticated request routing, redirects, direct responses, and blue-green and circuit breaker patterns. The VirtualServer and VirtualServerRoute resources are enabled by default and are ready for production use.
  • Support for OpenTracing, helping you to monitor and debug complex transactions.
  • An improved security posture, with support to run the Ingress Controller as a non-root user.

The release announcement blog post includes the overview for each feature. See https://www.nginx.com/blog/announcing-nginx-ingress-controller-for-kubernetes-release-1-6-0/

You will find the complete changelog for release 1.6.0, including bug fixes, improvements, and changes below.

FEATURES FOR VIRTUALSERVER AND VIRTUALSERVERROUTE RESOURCES:

  • 780: Add support for canned responses to VS/VSR.
  • 778: Add redirect support in VS/VSR.
  • 766: Add exact matches and regex support to location paths in VS/VSR.
  • 748: Add TLS redirect support in Virtualserver.
  • 745: Improve routing rules in VS/VSR
  • 728: Add session persistence in VS/VSR.
  • 724: Add VS/VSR Prometheus metrics.
  • 712: Add service subselector support in vs/vsr.
  • 707: Emit warning events in VS/VSR.
  • 701: Add support queue in upstreams for plus in VS/VSR.
  • 693: Add ServerStatusZones support in vs/vsr.
  • 670: Add buffering support for vs/vsr.
  • 660: Add ClientBodyMaxSize support in vs/vsr.
  • 659: Support configuring upstream zone sizes in VS/VSR.
  • 655: Add slow-start support in vs/vsr.
  • 653: Add websockets support for vs/vsr upstreams.
  • 641: Add support for ExternalName Services for vs/vsr.
  • 635: Add HealthChecks support for vs/vsr.
  • 634: Add Active Connections support to vs/vsr.
  • 628: Add retries support for vs/vsr.
  • 621: Add TLS support for vs/vsr upstreams.
  • 617: Add keepalive support to vs/vsr.
  • 612: Add timeouts support to vs/vsr.
  • 607: Add fail-timeout and max-fails support to vs/vsr.
  • 596: Add lb-method support in vs and vsr.

Features

  • 750: Add support for health status uri customisation.
  • 691: Helper Functions for custom annotations.
  • 631: Add max_conns support for NGINX plus.
  • 629: Added upstream zone directive annotation. Thanks to Victor Regalado.
  • 616: Add proxy-send-timeout to configmap key and annotation.
  • 615: Add support for Opentracing.
  • 614: Add max-conns annotation. Thanks to Victor Regalado.
  • 678: Increase defaults for server-names-hash-max-size and server-names-hash-bucket-size ConfigMap keys.
  • 694: Reject VS/VSR resources with enabled plus features for OSS.
  • 799: Enable CRDs by default. VirtualServer and VirtualServerRoute resources are now enabled by default.
  • 772: Update VS/VSR version from v1alpha1 to v1. Make sure to update the apiVersion of your VirtualServer and VirtualServerRoute resources.
  • 748: Add TLS redirect support in VirtualServer. The redirect-to-https and ssl-redirect ConfigMap keys no longer have any effect on generated configs for VirtualServer resources.
  • 745: Improve routing rules. Update the spec of VirtualServer and VirtualServerRoute accordingly. See YAML examples of the changes here.
  • 710: Run IC as non-root. Make sure to use the updated manifests to install/upgrade the Ingress Controller.
  • 603: Update apiVersion in Deployments and DaemonSets to apps/v1.
  • Documentation improvements: 713 thanks to Matthew Wahner.

Fixes

  • 788: Fix VSR updates when namespace is set implicitly.
  • 736: Init Ingress labeled metrics on start.
  • 686: Check if config map created for leader-election.
  • 664: Fix reporting events for Ingress minions.
  • 632: Fix hsts support when not using SSL. Thanks to Martín Fernández.

Helm Chart

  • Add new parameters to the Chart: controller.healthCheckURI, controller.resources, controller.logLevel, controller.customPorts, controller.service.customPorts. Added in 750, 636 thanks to Guilherme Oki, 600, 581 thanks to Alex Meijer.
  • 722: Fix trailing leader election cm when using helm. This change might lead to a failed upgrade. See the helm upgrade instruction below.
  • 573: Use Controller name value for app selectors.

Dependencies

  • Update NGINX versions to 1.17.6.
  • Update NGINX Plus version to R20.

Upgrade

  • For NGINX, use the 1.6.0 image from our DockerHub: nginx/nginx-ingress:1.6.0 or nginx/nginx-ingress:1.6.0-alpine
  • For NGINX Plus, please build your own image using the 1.6.0 source code.
  • For Helm, use version 0.4.0 of the chart.

Helm upgrade

If leader election (the controller.reportIngressStatus.enableLeaderElection parameter) is enabled, when upgrading to the new version of the Helm chart:

  1. Make sure to specify a new ConfigMap lock name (controller.reportIngressStatus.leaderElectionLockName) different from the one that was created by the current version. To find out the current name, check ConfigMap resources in the namespace where the Ingress Controller is running.
  2. After the upgrade, delete the old ConfigMap.

Otherwise, the helm upgrade will not succeed.
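
For step 1, a hedged values.yaml sketch (the lock name is a placeholder; use any name that differs from the existing ConfigMap):

    controller:
      reportIngressStatus:
        enableLeaderElection: true
        leaderElectionLockName: nginx-ingress-leader-election-160   # placeholder new lock name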


Previous Releases

To see the previous releases, see the Releases page on the Ingress Controller GitHub repo.


Last modified November 19, 2024