Frequently Asked Questions
Common questions about NGINX as a Service for Azure.
- Your NGINXaaS deployment resource is visible to you under your subscription. The underlying compute resources of your deployment, which are managed by NGINX on your behalf, are not visible in your subscription.
- NGINXaaS is deployed as an active-active pattern for high availability. To learn more, see the user guide.
- We are constantly adding support for new regions. You can find the updated list of supported regions in the NGINXaaS documentation.
My servers are located in different geographies. Can NGINXaaS load balance across these upstream servers?
- Yes, NGINXaaS can load balance across upstream servers located in different geographies, subject to the networking limitations listed in the Known Issues.
- NGINXaaS is integrated with Azure Monitor and publishes traffic statistics to it. Customers can analyze the traffic statistics by following the steps in the NGINXaaS Monitoring documentation.
- Consider scaling out if the NCUs consumed exceed 90% of the NCUs requested. See the Scaling documentation to learn more.
- Consider scaling in if the NCUs consumed are below 80% of the NCUs requested. To learn more, see the NGINXaaS Scaling Guidance documentation.
With self-managed NGINX Plus, customers SSH into the NGINX Plus host, store their certificates on local storage, and configure the network and subnet to connect to NGINX Plus.
With NGINXaaS, customers store their certificates in Azure Key Vault, and either place their workloads in the same VNet as NGINXaaS or peer their VNet with the VNet in which NGINXaaS is deployed.
You can monitor the NCUs consumed on the metrics tab of NGINXaaS. Choose NGINXaaS statistics and select “NCUs consumed.” If the NCUs consumed are close to the NCUs requested, we encourage you to scale out and increase the requested NCUs. You can manually scale from your base NCU count (for example, 20) up to 160 NCUs by selecting the NGINXaaS scaling tab.
Currently, we support scaling in 10 NCU intervals (10, 20, 30, and so on).
See the Scaling Guidance documentation for more information.
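If you manage your deployment with the Azure CLI, a scaling change can be sketched roughly as follows. This is an assumption-laden sketch: the `az nginx` extension name, the deployment and resource group names, and the `--scaling-properties` flag shown here are illustrative; check `az nginx deployment update --help` for the exact syntax.

```shell
# Hypothetical sketch: raise the requested capacity of an existing
# NGINXaaS deployment to 30 NCUs (names and flags are illustrative).
az extension add --name nginx        # install the NGINX CLI extension if missing
az nginx deployment update \
  --name myDeployment \
  --resource-group myResourceGroup \
  --scaling-properties capacity=30
```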
NGINXaaS supports self-signed certificates, Domain Validated (DV) certificates, Organization Validated (OV) certificates, and Extended Validation (EV) certificates.
Currently, NGINXaaS supports PEM and PKCS12 format certificates.
See the SSL/TLS Certificates documentation to learn how to change certificates.
- Yes, NGINXaaS currently supports Layer 4 (TCP) and Layer 7 (HTTP) load balancing.
- No, NGINXaaS does not support IPv6 yet.
- At this time, we support the following two protocols:
- NGINXaaS supports one public or private IP per deployment. NGINXaaS doesn’t support a mix of public and private IPs at this time.
- You cannot change the IP address associated with an NGINXaaS deployment from public to private, or from private to public.
- The minimum subnet size is /27; however, we recommend a subnet size of /24.
- Yes, however, every deployment in the subnet will share the address space (range of IP addresses that resources can use within the VNet), so ensure the subnet is adequately sized to scale the deployments.
- Typically you can deploy NGINXaaS in under 5 minutes.
- There’s no downtime during updates to NGINXaaS.
- No, there’s no downtime while scaling out/in.
- In any Azure region with more than one availability zone, NGINXaaS provides cross-zone replication for disaster recovery. See Architecture for more details.
- Yes. You can override the NGINX defaults to configure the desired TLS/SSL policy. Read more about the procedure in the Module ngx_http_ssl_module documentation.
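As a minimal sketch, a server block can override the default protocols and cipher preferences. The hostname, certificate paths, and protocol choices below are illustrative, not a recommendation:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                         # placeholder hostname

    ssl_certificate     /etc/nginx/ssl/example.crt;  # illustrative paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # Restrict the listener to newer TLS versions and prefer
    # the server's cipher ordering.
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
}
```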
- NGINXaaS supports up to 100 TLS/SSL certificates.
- Yes, NGINXaaS natively integrates with Azure Key Vault, so you can bring your own certificates and manage them in a centralized location. You can learn more about adding certificates in Azure Key Vault in the SSL/TLS Certificates documentation.
- Yes, the subnet can contain other resources and is not dedicated to the NGINXaaS for Azure resources; ensure the subnet size is adequate to scale the NGINXaaS deployment.
- Yes, an NSG is required in the subnet where NGINXaaS will be deployed to ensure that the deployment is secured and inbound connections are allowed to the ports the NGINX service listens to.
Can I restrict access to NGINXaaS based on various criteria, such as IP addresses, domain names, and HTTP headers?
- Yes, you can restrict access to NGINXaaS by defining restriction rules at the Network Security Group level or using NGINX’s access control list. To learn more, see the NGINX module ngx_http_access_module documentation.
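A minimal sketch of an NGINX access control list, using placeholder addresses and a hypothetical /admin/ location:

```nginx
location /admin/ {
    # Allow a single trusted network and deny everyone else.
    # Rules are evaluated in order; the first match wins.
    allow 192.168.1.0/24;   # placeholder trusted range
    deny  all;
}
```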
- NGINXaaS currently supports VNets and VPN gateways, subject to the limitations listed in the Known Issues.
- Yes, NGINXaaS supports end-to-end encryption from client to upstream server.
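End-to-end encryption means terminating TLS from the client and re-encrypting traffic to the upstream. A minimal sketch, with placeholder hostnames and certificate paths:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                          # placeholder

    ssl_certificate     /etc/nginx/ssl/example.crt;   # illustrative paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        # Re-encrypt traffic to the upstream over HTTPS and
        # verify the upstream's certificate against a trusted CA.
        proxy_pass https://backend.internal.example;  # placeholder upstream
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/nginx/ssl/upstream-ca.crt;
        proxy_ssl_server_name on;
    }
}
```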
NGINXaaS supports the following two types of logs:
Access log: used to troubleshoot server issues, analyze web traffic patterns, and monitor server performance. For more details, see the Module ngx_http_log_module documentation.
Error log: captures and helps troubleshoot issues that occur during the server’s operation, such as 400 Bad Request, 401 Unauthorized, and 500 Internal Server Error responses. For more details, see the Core functionality documentation.
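Both logs are configured with standard NGINX directives; a minimal sketch with illustrative paths and format:

```nginx
http {
    # Define a custom access log format and write the access log.
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent';
    access_log /var/log/nginx/access.log main;  # path is illustrative

    # error_log sets the file and the minimum severity to record.
    error_log /var/log/nginx/error.log warn;
}
```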
What is the retention policy for the above logs? How long are the logs stored? Where are they stored?
- NGINXaaS logs are stored in the customer’s own storage, and customers define the retention policy themselves. You can configure the storage by following the steps outlined in the NGINXaaS Logging documentation.
- You can set up an alert with NGINXaaS by following the steps outlined in the Configure Alerts documentation.
- Yes, see the Application Performance Management with NGINX Variables documentation to learn more about tracing.
- No; NGINXaaS will deploy the right resources to ensure you get the right price-to-performance ratio.
- Yes, you can bring your own configurations or create a new configuration in the cloud. See the NGINXaaS Deployment documentation for more details.
- Yes, the “ssl_certificate” directive can be specified multiple times to load certificates of different types. To learn more, see the Module ngx_http_ssl_module documentation.
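For example, a server block can load both an RSA and an ECDSA certificate for the same hostname; clients negotiate whichever type they support. Hostname and paths below are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name example.com;   # placeholder

    # RSA certificate
    ssl_certificate     /etc/nginx/ssl/example-rsa.crt;
    ssl_certificate_key /etc/nginx/ssl/example-rsa.key;

    # ECDSA certificate for clients that support it
    ssl_certificate     /etc/nginx/ssl/example-ecdsa.crt;
    ssl_certificate_key /etc/nginx/ssl/example-ecdsa.key;
}
```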
- In addition to HTTP to HTTPS, HTTPS to HTTP, and HTTP to HTTP, NGINXaaS provides the ability to create new rules for redirecting. See How to Create NGINX Rewrite Rules | NGINX for more details.
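The common HTTP-to-HTTPS case can be sketched with a small server block (hostname is a placeholder):

```nginx
server {
    listen 80;
    server_name example.com;   # placeholder
    # Permanently redirect all plain-HTTP requests to HTTPS,
    # preserving the original host and URI.
    return 301 https://$host$request_uri;
}
```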
What content types does NGINXaaS support for the message body for upstream/NGINXaaS error status code responses?
- Customers can use any type of response message, including the following:
- Once you successfully deploy NGINXaaS, select the deployment in the Azure portal; you can see both public and private IP addresses, as shown in the following screenshot:
- The NGINXaaS deployment IP doesn’t change over time.
- No; NGINXaaS provides manual scaling. You can change the scaling unit by entering the desired capacity, up to 160 NCUs.
- Currently, NGINXaaS can’t be manually started or stopped. You have the option to delete the deployment and redeploy at a future date.
- No, we do not currently support a change in the virtual network or subnet for an existing NGINXaaS deployment.
- NGINXaaS operates at Layer 7 with the HTTP protocol. To configure .com and .net servers, set the server name in the server block within the HTTP context. To learn more, and see examples, follow the instructions in the NGINX Configuration documentation.
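A minimal sketch of serving two domains from one configuration, using placeholder domain names:

```nginx
http {
    server {
        listen 80;
        server_name example.com;    # placeholder .com domain
        # ... .com-specific configuration ...
    }
    server {
        listen 80;
        server_name example.net;    # placeholder .net domain
        # ... .net-specific configuration ...
    }
}
```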
If I remove/delete an NGINXaaS deployment, what will happen to the eNICs that were associated with it?
- When you remove or delete an NGINXaaS deployment, the associated eNICs will automatically be deleted.
Each NGINXaaS deployment has two resource groups associated with it:
- A resource group specified by the user as part of creating the deployment. This resource group contains the NGINXaaS deployment and is fully controlled by the end user.
- A secondary managed resource group created by the NGINXaaS service which contains networking resources related to the NGINXaaS deployment. The service manages the lifecycle of resources created within the managed resource group. The default naming convention for the managed resource group is NGX_MyResourceGroup_MyDeployment_Location. Users can name the managed resource group when creating a deployment with the Azure CLI, Terraform, and other client SDKs.
Things to keep in mind when working with the managed resource group:
- You cannot specify the managed resource group when creating the deployment with the Azure Portal.
- You cannot use an existing resource group as the managed resource group.
- The resource group should belong to the same subscription as the NGINXaaS deployment.
- You should not modify any existing properties on the resource group, such as tags, or the resources within it. Doing so might cause issues with deployment operations such as upgrades, scaling, and deletion. For example, locking the resource group or resources within it might cause issues with the NGINXaaS deployment.
The specific permissions required to deploy NGINXaaS are:
Additionally, if you are creating the Virtual Network or IP address resources that NGINXaaS for Azure will be using, you also need the permissions to create those resources.
Note that assigning the managed identity permissions normally requires an “Owner” role.
- Yes. If your DNS nameservers are configured in the same VNet as your deployment, then you can use those DNS nameservers to resolve the hostname of the upstream servers referenced in your NGINX configuration.
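One way to sketch this in NGINX configuration is with the resolver directive. 168.63.129.16 is Azure's VNet-provided DNS address; the upstream hostname is a placeholder, and the resolve parameter is an NGINX Plus feature that re-resolves the name at run time:

```nginx
http {
    # Use Azure's VNet-provided DNS (or your own nameservers
    # reachable from the VNet) to resolve upstream hostnames.
    resolver 168.63.129.16 valid=30s;

    upstream backend {
        zone backend 64k;
        # Placeholder hostname; "resolve" keeps the address current.
        server app.internal.example:443 resolve;
    }
}
```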
- No. While changing the value of the directive in the config is allowed, the change is not applied to the underlying NGINX resource of your deployment.
- Due to port restrictions on Azure Load Balancer health probes, port 993 is not allowed. The NGINXaaS deployment can listen on all other ports. A configuration can listen on at most 5 unique ports; configurations that specify more than 5 unique ports will be rejected.
- NGINXaaS is billed monthly based on hourly consumption.
- The NGINX agent periodically gathers connection and request statistics via an internal HTTP request. An Azure service health probe checks for status via a TCP connection for each listen port in the NGINX configuration, incrementing the connection count for each port. This contributes to minimal traffic and should not affect these metrics significantly.
- You can use an existing subnet to create a deployment. Please make sure that the subnet is delegated to NGINX.NGINXPLUS/nginxDeployments before creating a deployment in it. To delegate a subnet to an Azure service, see Delegate a subnet to an Azure service.
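With the Azure CLI, the delegation step can be sketched as follows; the VNet, subnet, and resource group names are placeholders:

```shell
# Delegate an existing subnet to NGINXaaS before deploying into it
# (names are illustrative).
az network vnet subnet update \
  --name mySubnet \
  --vnet-name myVNet \
  --resource-group myResourceGroup \
  --delegations NGINX.NGINXPLUS/nginxDeployments
```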
- NGINXaaS supports certificate rotation. See the Certificate Rotation documentation to learn more.