End of Sale Notice:

Commercial support for NGINX Service Mesh is available to customers who currently have active NGINX Microservices Bundle subscriptions. F5 NGINX announced the End of Sale (EoS) for the NGINX Microservices Bundles as of July 1, 2023.

See our End of Sale announcement for more details.

What is NGINX Service Mesh?

Learn about NGINX Service Mesh fundamentals.

What is a Service Mesh?

A service mesh is an infrastructure layer designed to provide fast, reliable, and low-latency network connections for highly distributed applications that require inter-process communication. A service mesh abstracts lower-layer networking concerns away from application developers and business logic. Meshes are often optimized for container environments and integrate seamlessly into the orchestration system, providing consistent and reliable services for the ephemeral, scalable, and dynamic applications built on modern architectures. Common properties provided by service meshes include service discovery, identity, load balancing, encryption, traffic control, resiliency and availability, and observability.

Various patterns exist for implementing service meshes, such as node level, service level, and application libraries. While implementations may differ, each service mesh generally serves the same purpose: an infrastructure layer that operates on the network, interstitially between an application’s distributed components.

  • Node level: a single mesh instance on a per-machine basis (cluster node or application host). The node-level mesh provides services for all application workloads via a single proxy per host, routing all application traffic to and from a single process.

  • Service level: a network component that resides alongside and close to each individual application service instance (a 1:1 relationship of mesh instance to service runtime instance). Each workload is allocated a dedicated proxy. This implementation may also be referred to as the sidecar pattern and is the most popular implementation style. The advantage of the sidecar (and node level) pattern is a decoupling of mesh and business logic functions.

  • Application libraries: each application workload is compiled and linked to libraries that provide network and mesh functions. Application business logic must be designed, developed, and possibly recompiled to use the library SDKs. Service mesh properties are incorporated into the binary and are coupled to the design and code of each business logic entity.

NGINX Service Mesh implements the service level (sidecar) pattern. Each mesh functional unit resides next to the orchestration system’s smallest unit of abstraction (for instance, in a Kubernetes environment, there is a proxy per Pod). Sidecars handle interservice communication, monitoring, and security-related concerns: anything that can be abstracted away from the individual services. This way, developers handle development, support, and maintenance of the services’ application code, while operations teams maintain the service mesh and run the app.
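The sidecar arrangement can be pictured as a minimal Pod spec. This is an illustrative sketch only: the container names and images below are hypothetical, and in practice NGINX Service Mesh injects its sidecar automatically rather than requiring you to declare it by hand.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    # The application container: business logic only, unaware of the mesh.
    - name: app
      image: example/app:1.0          # hypothetical image
      ports:
        - containerPort: 8080
    # The sidecar proxy: shares the Pod's network namespace, so it can
    # mediate traffic to and from the app container.
    - name: proxy-sidecar
      image: example/proxy:1.0        # hypothetical; injected by the mesh in practice
```

Because both containers share the Pod's network namespace, the proxy can intercept interservice traffic without any change to the application code.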

Service mesh abstractions have been acknowledged as important pieces to enable microservice architectures.

Service Mesh Concepts

Microservices, and in turn container orchestration systems and service meshes, come with their own terminology for component services and functions:

  1. Microservices Architecture: Often shortened to microservices, this is an architectural style that enables rapid development, quick iteration, independently scalable components, decoupled business functions, and reliable deployment of complex, distributed applications. It emphasizes strong boundaries of responsibility; highly specialized, independent functional components; and the Unix philosophy of design (functional components should be designed to do one thing well; “simple, short, clear, modular, and extensible code”).

  2. Container orchestration system: Container workloads and orchestration systems enable microservice architectures. With its focus on small, independent, and scalable components, a microservice architecture leads to a multiplication of processes needing management. In basic terms, a container is a bundle of software and its dependencies, packaged together to run isolated in a virtual environment. Container orchestration refers to the lifecycle management, monitoring, and configuration of these individualized software bundles. Various options exist, such as Docker Swarm and Mesosphere DC/OS, with Kubernetes being the current de facto market standard. NGINX Service Mesh supports only the Kubernetes orchestration system.

  3. Service Abstraction: A service can refer to the single running copy – the host machine process – of a microservice application; that is, one instance of one component of an aggregated and distributed application. Alternatively, a service may refer to the logical boundary around a functional unit, omitting the individual instances doing the work, as they all perform identical functions. Kubernetes uses the latter definition: a Service is a configuration abstraction representing a set of replicated instances. Notably, Kubernetes’s fundamental unit of work is not the container but the Pod, which can contain one or many containers; each Pod is one replica. In Kubernetes, clients rarely communicate with Pods directly, communicating instead through the Service abstraction. NGINX Service Mesh carries this abstraction forward, requiring functional components to be represented as Services before providing infrastructure access to the individual replicas.

  4. Controller pattern: As with control theory, robotics, and automation, Kubernetes uses a controller pattern to regulate the system’s state. Controllers are often implemented as event loops reacting to and actuating the desired state of the system. Each individual controller may actuate on one isolated configuration resource with its side-effects internal to the cluster itself. Or the controller may watch and create relationships across multiple configuration objects and make state changes to the cluster itself and external resources throughout the environment. Ultimately, each controller operates in a loop – actuating, enforcing, and repairing the desired state of the system. NGINX Service Mesh control plane implements this pattern across many Kubernetes configuration primitives and custom extensions. These controllers work together to maintain a stable mesh infrastructure for application components.

  5. Sidecar pattern: As mentioned earlier, NGINX Service Mesh uses the container sidecar pattern to steer traffic, enforce policy, provide resiliency, and abstract network concerns from the application business logic. The sidecar pattern places a sibling container “next” to workload containers – often, this means sharing the network and IPC namespaces, among others – and enables the augmentation, enhancement, or extension of a process without requiring changes to the original process or application. Kubernetes provides features to allow this pattern. For instance, multiple containers can reside in a single Pod, and configurations can be mutated to add containers before being accepted (this is known as injection). Mesh operators can opt in and out of this behavior at various layers; but, in practice, each application Pod within the mesh will have a sidecar injected. This sidecar is responsible for the infrastructure properties provided by the mesh as a whole.
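The Service abstraction described above can be sketched as plain Kubernetes manifests. The names, labels, and image below are illustrative assumptions, not part of NGINX Service Mesh itself:

```yaml
# A Deployment creates the replicated Pods (the individual workload instances).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: example/backend:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
# The Service is the logical boundary clients address; it resolves to
# whichever Pod replicas are currently ready.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```

Clients connect to the `backend` Service name, never to individual Pod IPs, which is the abstraction the mesh builds on when routing traffic to replicas.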

Service Mesh Properties

NGINX Service Mesh provides the following properties enabled by its administrative and functional configuration resources. We intend to offer a high-level explanation of important service mesh properties, with each corresponding feature discussed in more detail within our guides and tutorials.

  1. Service Discovery: As individual instances appear and disappear, all other running services need to know where to find them and how to reach them. Typically, a service instance performs a DNS lookup for this purpose. The container orchestration framework keeps a list of instances that are ready to receive requests and provides the interface for DNS queries. The service mesh does not interfere with this process; it maintains the same service list while overlaying other mesh properties onto the known services. For example, Kubernetes includes and manages DNS servers within the cluster. DNS entries for each Pod (an individual workload instance) and Service (the load balancing and collective functional unit abstraction) are managed according to the resource lifecycle. NGINX Service Mesh maintains an awareness of the same IP address sets, passing traffic to the proper workloads based on its independent load balancing algorithms (see below).

  2. Identity: Each registered service and the underlying entities receive an immutable identity. These identities form a trust domain and a simple base authentication layer for included service instances. The identity system works in accord with encryption schemes to build a zero-trust environment and a foundation for more advanced traffic control, shaping, and enforced application topologies when the basic, flat landscape is undesirable. NGINX Service Mesh provides and enforces identity using SPIFFE and the SPIRE runtime (see the Architecture section for details). Workload identity, rooted by Kubernetes ServiceAccounts and verified via the SPIRE runtime, forms the foundation of the NGINX Service Mesh’s Access Control features (see the Services using Access Control tutorial for a hands-on guide to NGINX Service Mesh’s authorization solution).

  3. Load Balancing: Most orchestration frameworks already provide Layer 4 (transport layer) load balancing. A service mesh implements more sophisticated Layer 7 (application layer) load balancing, with richer algorithms and more powerful traffic management. Load‑balancing parameters can be modified via the API, making it possible to orchestrate blue‑green or canary deployments. NGINX Service Mesh supports multiple load balancing algorithms. Further details are documented in the Load Balancing section.

  4. Encryption: The service mesh can offload complicated encryption and decryption responsibilities from functional components while also providing PKI management capabilities. The service mesh can add near-universal encryption between application endpoints with relative ease. The service mesh is optimized to provide connection re-use, session persistence, and mutual TLS (mTLS) with little to no administrative input; the generation and distribution of certificates and keys are handled automatically. For more information on securing traffic with NGINX Service Mesh, see the Secure Mesh Traffic using mTLS guide.

  5. Traffic Control: The service mesh makes it possible to control traffic at the application layer (Layer 7). Topologies can be created where tiers of access or specific point-to-point communications are enabled and disabled. The service mesh can provide efficient authorization functionality, allowing transactions at a granular level: endpoints, paths, methods, among others. Application instances can be protected while in development, individual features enabled and tested dynamically, and traffic shaped using blue-green and canary deployment patterns. NGINX Service Mesh supports access control and traffic shaping using Traffic Policies for more advanced traffic topologies.

  6. Resiliency and Availability: In conjunction with Service Discovery and Load Balancing, the service mesh will optimize for uptime. Many microservices use stateless, “fail fast” design patterns. This allows service instances to scale up and down quickly, minimizing failure damage and reacting promptly to changing usage and environmental states. The service mesh will perform connection and transaction retries, dynamically updating its known instances while also protecting server-side connections with configurable rate limit and circuit breaker settings. Each NGINX Service Mesh instance provides classic reverse proxy features in addition to other resiliency patterns; our NGINX SMI Extensions guide discusses these features in greater detail.

  7. Observability: As services expand, functional components multiply. As the number of service instances increases, an application can become inscrutable to developers and administrators. The service mesh has a complete view of the system and will provide insights into the application’s operation and performance. NGINX Service Mesh will provide discrete metrics and tracing data that can be aggregated by popular projects like Prometheus and Jaeger. Observability (metrics and tracing) is a fundamental mesh property. To learn more about NGINX Service Mesh’s offering, see our Monitoring and Tracing guide.