Known Issues

This document is a summary of the known issues in NGINX Management Suite API Connectivity Manager. Fixed issues are removed after 45 days.

We recommend upgrading to the latest version of API Connectivity Manager to take advantage of new features, improvements, and bug fixes.


1.4.0

Cluster and Environment deletion issues when Portal Docs are published

Issue ID Status
40163 Fixed in 1.4.1

Description

When a developer portal proxy is hosting API documentation, the infrastructure admin is, in some cases, unable to delete clusters in other, unrelated environments, and consequently cannot delete those environments either.


The Proxy Cluster API isn’t ready to be used

Issue ID Status
40097 Open

Description

The API Connectivity Manager API documentation has inadvertently released details of Proxy Cluster endpoints and related policies before their public launch. Consequently, the following Proxy Cluster endpoints and global policies should not be used yet.

The following Proxy Cluster endpoints are not ready for use:

  • /infrastructure/workspaces/{workspaceName}/proxy-clusters
  • /infrastructure/workspaces/{workspaceName}/proxy-clusters/{name}

The following global policies are not yet ready for use:

  • cluster-zone-sync
  • cluster-wide-config

A later version of the release notes will inform you when these endpoints and policies are ready.


Configurations aren’t pushed to newly onboarded instances if another instance is offline

Issue ID Status
40035 Open

Description

When a new instance is onboarded, it will not be configured if any other instances are offline.

Workaround

After onboarding the instance as usual, push the existing configuration again to the new instance, without making any changes.


1.3.0

OIDC policy cannot be applied alongside a proxy authentication policy

Issue ID Status
39604 Fixed in 1.4.0

Description

It is not possible to use both an OpenID Connect (OIDC) policy and a proxy authentication policy concurrently.


The web interface doesn’t pass the enableSNI property for the TLS backend policy

Issue ID Status
39445 Fixed in 1.3.1

Description

When configuring a TLS backend policy in the web interface, the new enableSNI property does not match the value of the deprecated proxyServerName property, resulting in an API error. The enableSNI value must be the same as the proxyServerName value.

Workaround

Use the NGINX Management Suite API Connectivity Manager REST API to send a PUT request to the following endpoint, providing the correct values for enableSNI and proxyServerName. Both values must match.

Method Endpoint
PUT /infrastructure/workspaces/{{infraWorkspaceName}}/environments/{{environmentName}}
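As a sketch of that request, the snippet below prints the curl command so its shape can be reviewed before running it. The hostname, workspace, environment, and backend server name are placeholders, and the body shows only the two properties that must agree; in a real request they sit inside the tls-backend policy data of the full environment payload.

```shell
# Placeholders: replace with your NGINX Management Suite FQDN, infrastructure
# workspace, and environment names.
NMS_FQDN="nms.example.com"
WORKSPACE="infra-ws"
ENVIRONMENT="prod"
URL="https://${NMS_FQDN}/api/acm/v1/infrastructure/workspaces/${WORKSPACE}/environments/${ENVIRONMENT}"

# Fragment of the tls-backend policy data: both values must agree.
BODY='{"proxyServerName": "backend.example.com", "enableSNI": true}'

# Print the request for review before running it.
echo curl -X PUT -H "Content-Type: application/json" -d "${BODY}" "${URL}"
```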


The Inbound TLS policy breaks when upgrading from API Connectivity Manager 1.2.0 to 1.3.0

Issue ID Status
39426 Fixed in 1.3.1

Description

The Inbound TLS policy for gateway clusters may break and cause an internal server error following an upgrade from API Connectivity Manager 1.2.0 to 1.3.0. Errors similar to the following example are logged:

cannot unmarshal string into Go struct field RuntimePolicies.Proxies.policies.tls-inbound of type runtime_policies.CACert.

Workaround

Upgrade API Connectivity Manager to 1.3.1 to resolve this issue.


A JWT token present in a query parameter is not proxied to the backend for advanced routes

Issue ID Status
39328 Open

Description

When using JWT authentication with advanced routes, a JWT token that is provided as a query parameter will not be proxied to the backend service.

Workaround

Pass the JWT token as a header instead of as a query parameter.
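A sketch of the change, with a placeholder hostname, path, and token. The exact header name depends on how the JWT policy is configured; the Authorization header with a Bearer scheme is shown as a common convention. The snippet prints the command rather than sending it:

```shell
# Placeholders: the API hostname, path, and token below are illustrative.
TOKEN="example.jwt.token"
API_URL="https://api.example.com/v1/orders"

# Instead of  curl "${API_URL}?token=${TOKEN}"  (the token is not proxied to
# the backend for advanced routes), send the token in a request header:
echo curl -H "Authorization: Bearer ${TOKEN}" "${API_URL}"
```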


1.2.0

Developer Portal backend information is unintentionally updated when editing clusters within an environment

Issue ID Status
39409 Open

Description

The Developer Portal backend information may be inadvertently updated in the following circumstances:

  1. If you have multiple Developer Portal clusters and update the backend information (for example, enable TLS or change the host or port) for any of those clusters, the update is applied to all of the clusters.

  2. If you have one or more Developer Portal clusters and update any other cluster in the environment (for example, the API Gateway or Developer Portal Internal cluster), the backend settings for the Developer Portal clusters are reset to their defaults (127.0.0.1, port 8080, no TLS).

Workaround

  • Workaround for scenario #1

    Use the NGINX Management Suite API Connectivity Manager REST API to send a PUT request to the following endpoint with the correct backend settings for each Developer Portal cluster:

    Method Endpoint
    PUT /infrastructure/workspaces/{{infraWorkspaceName}}/environments/{{environmentName}}

  • Workaround for scenario #2

    If you have just one Developer Portal cluster, you can use the web interface to update the backend settings for the cluster if you’re not using the default settings.

    If you have more than one Developer Portal cluster, use the NGINX Management Suite API Connectivity Manager REST API to send a PUT request to the following endpoint with the correct backend settings for each cluster:

    Method Endpoint
    PUT /infrastructure/workspaces/{{infraWorkspaceName}}/environments/{{environmentName}}
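The scripted shape of that fix can be sketched as follows, with placeholder names throughout: fetch the current environment object, correct the Developer Portal backend settings in the saved JSON, and send the whole object back with PUT. The commands are printed rather than executed:

```shell
# Placeholders: replace with your NGINX Management Suite FQDN, infrastructure
# workspace, and environment names.
NMS_FQDN="nms.example.com"
WORKSPACE="infra-ws"
ENVIRONMENT="prod"
URL="https://${NMS_FQDN}/api/acm/v1/infrastructure/workspaces/${WORKSPACE}/environments/${ENVIRONMENT}"

# 1. Fetch the current environment definition.
echo curl -X GET "${URL}" -o environment.json

# 2. Edit environment.json so each Developer Portal cluster carries the
#    correct backend host, port, and TLS setting, then send it back whole.
echo curl -X PUT -H "Content-Type: application/json" -d @environment.json "${URL}"
```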


The user interface erroneously includes irrelevant information in the TLS inbound policy workflow

Issue ID Status
38046 Open

Description

On the TLS inbound policy, toggling Enable Client Verification On/Off results in the user interface adding irrelevant information that causes publishing to fail with a validation error.

Workaround

Dismiss the policy without saving and restart the UI workflow to add the TLS inbound policy.


Portals secured with TLS policy require additional environment configuration prior to publishing API docs

Issue ID Status
38028 Fixed in 1.3.0

Description

When the tls-backend policy is applied on a developer portal cluster, the communication between the portal UI and the portal backend service is secured. By default, when the portal cluster is created and the backend is not explicitly specified in the payload, the backend protocol defaults to HTTP. Adding the tls-backend policy does not automatically upgrade the protocol to HTTPS, and publishing API docs to the portal will fail unless the protocol is set to HTTPS. The user has to change the backend protocol to HTTPS explicitly.

Workaround

In the user interface, navigate to Workspace > Environment > Developer Portal Clusters > Edit Advanced Config. Select “edit the Backend” and toggle the Enable TLS switch to enabled.


A proxy deployed with a specRef field (OAS) and basePathVersionAppendRule set to a value other than NONE may cause versions to appear twice in the deployed location block

Issue ID Status
36666 Open

Description

If you add an API doc and reference it with the specRef field in the proxy object, the OAS (API doc) is used as the source of truth for the base path. If the OAS (API doc) contains the full correct base path, and you use any basePathVersionAppendRule value other than NONE, the base path will be corrupted by appending/prepending the version in the deployment (e.g. /api/v3/v3).

Workaround

If you are using an API doc with a proxy:

  1. Put the entire true base path of the API in the server section of the API doc:

    servers:
    - url: https://(API-address)/api/v3
    

    or

    servers:
    - url: /api/v3
    
    Note:
    In the example above, only /api/v3 is relevant for this issue, and it should be the full base path to which the individual paths in the API document can be appended directly.
  2. Set the value of the base path version append rule (basePathVersionAppendRule) in the proxy to NONE.


New users are unable to see pages even though they have been given access

Issue ID Status
36607 Fixed in 1.3.0

Description

A newly created role needs a minimum of READ access on the LICENSING feature. Without it, users will not have access to the pages even though they have been granted permission, and will see 403 errors surfacing as license errors when accessing the pages.

Workaround

Assign a minimum of READ access on the LICENSING feature to all new roles.


1.1.0

Advanced routing ignores the Context Root setting for backend proxies

Issue ID Status
36775 Fixed in 1.1.1

Description

Advanced routing ignores the Context Root setting for backend proxies. The Context Root is not prefixed as expected before proxying, which may cause the request to be incorrectly routed.


To see updates to the Listeners table, a forced refresh of the cluster details page is required

Issue ID Status
36540 Fixed in 1.2.0

Description

When trying to update the Advanced Config for an environment cluster, changes are not reflected on the cluster details page after saving and submitting successfully.

Workaround

Refresh or reload the browser page to see changes on the cluster details page.


Using labels to specify the backend is partially available

Issue ID Status
36317 Fixed in 1.2.0

Description

The targetBackendServiceLabel label cannot be updated using the web interface, and it is not configurable at the URI level in the spec.

Workaround

The targetBackendServiceLabel label can be updated by sending a PUT request to the API.
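As an illustration only, a request of that shape might look like the snippet below. The proxies endpoint path and the value shown for targetBackendServiceLabel are assumptions, not taken from these notes; consult the API reference for the exact path and payload, and remember that PUT replaces the whole proxy object. The snippet prints the command rather than sending it:

```shell
# All names below are placeholders; the proxies endpoint path is illustrative
# and may differ in your API version.
NMS_FQDN="nms.example.com"
PROXY_URL="https://${NMS_FQDN}/api/acm/v1/services/workspaces/dev-ws/proxies/my-api"

# Sketch of the changed field only; the real body must be the complete
# proxy object, and the value's shape here is illustrative.
BODY='{"targetBackendServiceLabel": "backend-v2"}'

echo curl -X PUT -H "Content-Type: application/json" -d "${BODY}" "${PROXY_URL}"
```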


Rate limit policy cannot be applied with OAuth2 JWT Assertion policy

Issue ID Status
36095 Fixed in 1.2.0

Description

Rate limit policy cannot be applied with the OAuth2 JWT assertion policy.


Unable to delete an environment that is stuck in a Configuring state

Issue ID Status
35546 Open

Description

In the web interface, after deleting all of the proxy clusters in an environment that’s in a FAIL state, the environment may transition to a CONFIGURING state and cannot be deleted.

Workaround

Add back the deleted proxy clusters using the web interface. The environment will transition to a FAIL state. At this point, you can delete the environment by sending a DELETE request to:

https://<NMS-FQDN>/api/acm/v1/infrastructure/workspaces/<infra-workspace-name>/environments/<environmentname>
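With placeholder values filled in, the request can be sketched as follows; the snippet prints the command so it can be reviewed before running:

```shell
# Placeholders: substitute your NGINX Management Suite FQDN, infrastructure
# workspace, and environment names.
NMS_FQDN="nms.example.com"
INFRA_WORKSPACE="infra-ws"
ENVIRONMENT="stuck-env"

echo curl -X DELETE "https://${NMS_FQDN}/api/acm/v1/infrastructure/workspaces/${INFRA_WORKSPACE}/environments/${ENVIRONMENT}"
```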

Enums are not supported in Advanced Routing

Issue ID Status
34854 Fixed in 1.2.0

Description

Enums cannot be set for path or query parameters when applying advanced routing. A list of specific values cannot be specified for advanced routing parameters.


1.0.0

API Connectivity Manager module won’t load if the Security Monitoring module is enabled

Issue ID Status
39943 Fixed in Instance Manager 2.8.0

Description

If you have Instance Manager 2.7 or earlier installed and attempt to enable both the API Connectivity Manager (ACM) and Security Monitoring (SM) modules on the same NGINX Management Suite management plane, the ACM module will not load because of incompatibility issues with the SM module.

Workaround

Before enabling the ACM and SM modules, ensure that your Instance Manager is upgraded to version 2.8 or later. Be sure to read the release notes for each module carefully, as they may contain important information about version dependencies.

To see which version of Instance Manager you have installed, run the following command:

  • CentOS, RHEL, RPM-based:

    yum info nms-instance-manager
    
  • Debian, Ubuntu, Deb-based:

    dpkg -s nms-instance-manager
    

Traffic is not secured between the API Proxy and backend servers

Issue ID Status
36714 Fixed in 1.1.1

Description

The Enable TLS setting on the API Proxy backend servers is ignored.


OIDC policy doesn’t work with Auth0 Identity Providers

Issue ID Status
36058 Fixed in 1.1.1

Description

The OpenID Connect (OIDC) policy does not work with Auth0 Identity Providers (IDP). The token exchange fails because the Accept-Encoding value isn’t sent as an explicit header value, which the Auth0 IDP requires.


DEVPORTAL_OPTS in /etc/{default,sysconfig}/nginx-devportal does not work if value has multiple words

Issue ID Status
36040 Fixed in 1.1.0

Description

Passing command-line arguments to the nginx-devportal service on the Dev Portal backend server using the DEVPORTAL_OPTS variable in /etc/{default,sysconfig}/nginx-devportal doesn’t work if the value has more than one word in it; the service fails to start. The entire value is sent as a single command-line argument by systemd instead of being parsed into multiple arguments.

You can view the log errors by running the following command:

sudo journalctl -fu nginx-devportal

Workaround

Edit /etc/nginx-devportal/nginx-devportal.conf to configure your desired options instead of passing them as command-line arguments.


PATCH on API Proxies endpoint is not implemented

Issue ID Status
35771 Open

Description

The PATCH method for API proxies is listed in the API spec; however, this method hasn’t been implemented yet.

Workaround

Use PUT instead for API proxies.
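A sketch of the substitution, using an illustrative proxy endpoint path and payload file (neither is taken from these notes). Because PUT replaces the entire resource, send the complete proxy object, not just the changed fields. The snippet prints the command rather than sending it:

```shell
# Illustrative path and payload file; PATCH is listed in the spec but
# not implemented, so a partial update cannot be sent.
PROXY_URL="https://nms.example.com/api/acm/v1/services/workspaces/dev-ws/proxies/my-api"

# proxy.json must contain the full proxy object, including unchanged fields.
echo curl -X PUT -H "Content-Type: application/json" -d @proxy.json "${PROXY_URL}"
```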


Credentials endpoint is disabled by default

Issue ID Status
35630 Fixed in 1.2.0

Description

For security reasons, the Credentials endpoint is disabled by default. To use the Developer Portal credentials workflow, you need to make configuration changes on the ACM host to enable the Credentials endpoint. Optionally, communication between ACM and the Developer Portal can be secured by providing certificates.

Workaround

To enable the Credentials endpoint on the ACM host:

  1. Open an SSH connection into the ACM host and log in.

  2. Enable the Credentials endpoint:

    In /etc/nms/nginx/locations/nms-acm.conf, uncomment the location block.

    # Deployment of resource credentials from the devportal
    # Uncomment this block when using devportal. Authentication is disabled
    # for this location. This location block will mutually
    # verify the client trying to access the credentials API.
    location = /api/v1/devportal/credentials {
            # OIDC authentication (uncomment to disable)
            #auth_jwt off;
            auth_basic off;
            error_page 401 /401_certs.json;
            if ($ssl_client_verify != SUCCESS) {
              return 401;
            }
            proxy_pass http://apim-service/api/v1/devportal/credentials;
    }
    
  3. Save the changes.

  4. Reload NGINX on the ACM host:

    nginx -s reload
    

Unable to delete an environment that is stuck in a Configuring state

Issue ID Status
35546 Fixed in 1.2.0

Description

In the web interface, after deleting all of the proxy clusters in an environment that’s in a FAIL state, the environment may transition to a CONFIGURING state and cannot be deleted.

Workaround

Add back the deleted proxy clusters using the web interface. The environment will transition to a FAIL state. At this point, you can delete the environment by sending a DELETE request to:

https://<NMS-FQDN>/api/acm/v1/infrastructure/workspaces/<infra-workspace-name>/environments/<environmentname>

Features in the web interface are not displayed after uploading license

Issue ID Status
35525 Fixed in 1.1.0

Description

After uploading a valid ACM license, some features in the web interface are not displayed or remain restricted.

Workaround

Refresh the browser to load the updated permissions and show the missing features.


Cannot add, remove, or edit proxy clusters from an environment that has a published API proxy

Issue ID Status
35463 Fixed in 1.1.0

Description

When an environment has a published API proxy associated with it, existing proxy clusters cannot be changed. Additional proxy clusters cannot be added or removed.

Workaround

Unpublish the API proxy before adding, removing, or editing additional proxy clusters.


Environment reports a premature Success state even though some proxy clusters may not be onboarded

Issue ID Status
35430 Fixed in 1.1.0

Description

In an environment where some, but not all, proxy clusters are onboarded (that is, the NGINX Agent hasn’t been installed on the proxy cluster), the environment may report an invalid Success state.

Workaround

Install the NGINX Agent on the proxy cluster, then resubmit the environment.


JWT Assertion policy accepts an empty string value for tokenName property

Issue ID Status
35419 Fixed in 1.1.0

Description

The JWT Assertion policy accepts an empty value for the tokenName property, which may cause unexpected policy behavior.

Workaround

Include a valid tokenName value of at least three characters when adding the policy.


Installing NGINX Agent on Ubuntu 22.04 LTS fails with 404 Not Found error

Issue ID Status
35339 Open

Description

When installing the NGINX Agent on Ubuntu 22.04 LTS, the installation script fails with a 404 Not Found error similar to the following:

404 Not found [IP: <IP address>]
Reading package lists...
E: The repository 'https://192.0.2.0/packages-repository/deb/ubuntu jammy Release' does not have a Release file.
E: The repository 'https://pkgs.nginx.com/app-protect/ubuntu jammy Release' does not have a Release file.
E: The repository 'https://pkgs.nginx.com/app-protect-security-updates/ubuntu jammy Release' does not have a Release file.

Workaround

Edit the NGINX Agent install script to use the Ubuntu 20.04 codename, focal, instead of jammy.

  1. Download the installation script:

    curl -k https://<NGINX-INSTANCE-MANAGER-FQDN>/install/nginx-agent > install.sh
    
  2. Open the install.sh file for editing.

  3. Make the following changes:

    On lines 256-258, change the following:

    codename=$(cat /etc/*-release | grep '^DISTRIB_CODENAME' | 
    sed 's/^[^=]*=\([^=]*\)/\1/' | 
    tr '[:upper:]' '[:lower:]')
    

    to:

    codename=focal
    

    Alternatively, on line 454, change the following:

    deb ${PACKAGES_URL}/deb/${os}/ ${codename} agent
    

    to:

    deb ${PACKAGES_URL}/deb/${os}/ focal agent
    
  4. Save the changes.

  5. Run the install.sh script.


OIDC policy cannot be applied on a shared proxy cluster

Issue ID Status
35337 Open

Description

If the same proxy cluster is used for both the Developer Portal and API Gateway, the OIDC Policy is not applied.

Workaround

Within an environment, use separate proxy clusters for the Developer Portal and API Gateway when applying an OIDC policy.


OpenID Connect Discovery is not implemented

Issue ID Status
35186 Open

Description

The implementation to automatically fetch all the metadata from the IDP’s well-known endpoint is incomplete. Although the option to specify the well-known endpoint exists in the OIDC policy, it is not functional, so the endpoints have to be provided explicitly.

Workaround

Provide all the relevant endpoints (such as Keys, Authorize, Token, Logoff, and Userinfo) when configuring the OIDC policy.


Error codes are not configurable for the OIDC policy

Issue ID Status
34900 Open

Description

Adding custom error codes in the OIDC policy causes a validation error similar to the following example:

duplicate location "/_oidc_err_85de2f20_default_411"

Workaround

Use the default error codes included in the OIDC policy.


No validation when conflicting policies are added

Issue ID Status
34531 Fixed in 1.3.0

Description

When securing the API Proxy with policies like basic authentication or API key authentication, the user is not warned if a duplicate or conflicting policy has already been added. Conflicting policies are not validated.

Workaround

Secure the API proxy with only one authentication policy.


Multiple hostnames on a single proxy cluster are not supported

Issue ID Status
34457 Open

Description

The environment API allows an array of hostnames; however, this capability is not fully implemented.

Workaround

Use a single hostname per proxy cluster.


CORS policy doesn’t support proxying preflight requests to the backend when combined with an authentication policy

Issue ID Status
34449 Open

Description

On an API Proxy with an authentication policy, applying a CORS policy with preflightContinue=true is not supported.

Workaround

Apply the CORS policy and set preflightContinue=false.
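As a minimal illustration, the relevant fragment of the CORS policy data would carry the field below; the surrounding policy structure is omitted, and only the preflightContinue field itself comes from these notes:

```json
{
  "preflightContinue": false
}
```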