Service Mesh: what is it?
A service mesh is a dedicated infrastructure layer that controls service-to-service communication over a network. This approach, which allows the separate parts of an application to communicate with each other, is used by companies such as Twitter, Amazon, Airbnb, Netflix and Google. Most modern applications are designed as distributed collections of microservices, each of which performs a specific business function. The service mesh facilitates service-to-service communication and adds capabilities such as observability, traffic management and security, without requiring them to be built directly into the application code. This article explains what the service mesh actually involves.
Definition and architecture of Service Mesh
The term service mesh refers first of all to the infrastructure pattern that enables simplified communication between several services. It is also used for the software that implements this pattern: the best known are Linkerd (from Buoyant), Istio and Traefik Mesh (formerly Maesh); Zuul, Netflix's Java reverse proxy, is sometimes mentioned alongside them, although it is strictly an API gateway. Finally, by extension, service mesh also refers to the secure network created when this configuration is deployed.
In fact, with the rise of microservice architectures and distributed services replacing the traditional monolithic approach, the aims and functions of service meshes have broadened. In addition to simplifying communication, they must now meet more complex operational and security requirements such as:
metrics, monitoring and network surveillance;
load balancing, which is a technique for distributing the workload evenly among servers or other computing resources to optimise reliability, stability and network capacity;
canary deployments, which make a new version of an application available to some users before others;
availability, through software controlled by APIs rather than dedicated hardware;
end-to-end encryption and authentication;
recovery from failure.
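Load balancing, listed above, is easy to picture with a minimal sketch. The round-robin strategy below is one of the simplest policies a sidecar proxy can apply; the class and instance addresses are illustrative, not taken from any particular mesh implementation.

```python
import itertools

class RoundRobinBalancer:
    """Distributes requests evenly across a pool of service instances."""

    def __init__(self, instances):
        # cycle() endlessly repeats the pool in order, giving each
        # instance an equal share of the traffic.
        self._cycle = itertools.cycle(instances)

    def pick(self):
        # Each call returns the next instance in the pool.
        return next(self._cycle)

balancer = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
targets = [balancer.pick() for _ in range(6)]
# Six requests: each of the three instances receives exactly two.
```

Real meshes offer richer policies (least-connections, latency-aware, weighted), but they all reduce to the same idea: the proxy, not the application, decides which instance receives each request.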
To perform these functions, the service mesh typically offers network management flexibility beyond the capabilities of traditional API gateways.
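The canary deployments mentioned above are one example of this flexibility: the mesh splits traffic between two versions of a service by weight. A minimal sketch of weighted routing, with hypothetical version names and a fixed seed for reproducibility:

```python
import random

def route(weights, rng=random.Random(0)):
    """Pick a service version according to canary weights (percentages)."""
    versions = list(weights)
    return rng.choices(versions, weights=[weights[v] for v in versions], k=1)[0]

# Send roughly 10% of traffic to the new "v2" canary, the rest to stable "v1".
weights = {"v1": 90, "v2": 10}
sample = [route(weights) for _ in range(1000)]
# sample.count("v2") will be close to 100 out of 1000.
```

In a real mesh the weights live in the control plane's configuration rather than in application code, so operators can shift traffic to the canary gradually without redeploying anything.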
How does the service mesh work?
The service mesh architecture is based on one or more proxy instances known as sidecars. The name comes from their position relative to the application: like a sidecar attached to a motorbike, a sidecar proxy is attached to a parent application to add or extend its functionality.
In the case of the service mesh, these sidecar proxies sit in the Data Plane and are managed by the Control Plane, the two high-level components found in service mesh architectures.
The Data Plane (DP) is responsible for communication between services inside the mesh and can provide load balancing, disaster recovery and encryption through a dedicated infrastructure layer. It is composed of a set of stateless sidecar proxies that intercept all network traffic and offer a wide range of features depending on their configuration.
The Control Plane (CP) supervises these individual proxies and turns them into a distributed system. Each Data Plane proxy is attached to the Control Plane, which manages and configures every sidecar according to the service it is linked to. In this way the CP lets developers configure all the Data Plane proxies running in the mesh from one place. Control Planes can also expose a CLI or GUI to simplify application management.
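The relationship between the two planes can be sketched as a control plane that pushes per-service configuration to each registered sidecar. All class, field and service names below are illustrative, not drawn from any specific mesh product.

```python
class SidecarProxy:
    """Data Plane proxy attached to one service; applies config pushed by the CP."""

    def __init__(self, service_name):
        self.service_name = service_name
        self.config = {}

    def apply(self, config):
        self.config = dict(config)

class ControlPlane:
    """Supervises all sidecars and distributes per-service configuration."""

    def __init__(self):
        self.proxies = []

    def register(self, proxy):
        self.proxies.append(proxy)

    def push(self, config_by_service):
        # Configure each sidecar according to the service it is attached to.
        for proxy in self.proxies:
            proxy.apply(config_by_service.get(proxy.service_name, {}))

cp = ControlPlane()
orders = SidecarProxy("orders")
payments = SidecarProxy("payments")
cp.register(orders)
cp.register(payments)

# One central push configures every proxy in the mesh.
cp.push({"orders": {"timeout_ms": 500},
         "payments": {"timeout_ms": 200, "mtls": True}})
```

The key design point this illustrates: the application containers never see this configuration; policy changes flow from the control plane to the proxies without touching application code.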
When should you use a service mesh?
For DevOps and applications based on numerous microservices
The way the service mesh operates highlights its major advantage: centralisation.
Applications structured with a microservices architecture may comprise several dozen services, each with its own instance and interacting in the environment. Managing this organisation, including monitoring the status and performance of each service in the application, can therefore quickly become complex for developers.
The service mesh allows them to isolate and manage communication between services in a separate, centralised infrastructure layer. The pattern therefore becomes particularly advantageous as the number of microservices in an application grows.
For DevOps teams with an established CI/CD pipeline, the service mesh can also be very useful for deploying applications programmatically through an application infrastructure such as Kubernetes. It also lets them manage networking and security policies directly as code.
Advantages and disadvantages of service mesh
The service mesh is intended to make it easier to manage service-to-service communication, but it does not solve all problems.
The benefits of the service mesh
simplified communication between services;
easier diagnosis of communication errors, which now occur in a dedicated infrastructure layer;
faster development and deployment;
more realistic testing, in particular the ability to inject latency or failures to simulate their effects under real conditions;
better resistance to downtime, as the service mesh can redirect requests by avoiding failed services;
support for security features (encryption, authentication, etc.); the service mesh can also provide a certificate authority that generates one certificate per service for TLS communication between services.
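The fault injection mentioned in the list above can be sketched as a wrapper a proxy places around every outbound call, adding latency or aborting a configurable fraction of requests. The function name and error type here are illustrative; real meshes express the same idea in routing configuration rather than code.

```python
import random
import time

def with_fault_injection(call, abort_rate=0.0, delay_seconds=0.0, rng=None):
    """Wrap a service call, injecting latency and failures as a mesh proxy can."""
    rng = rng or random.Random()

    def wrapped(*args, **kwargs):
        if delay_seconds:
            time.sleep(delay_seconds)        # injected latency
        if rng.random() < abort_rate:        # injected failure
            raise ConnectionError("fault injected by proxy")
        return call(*args, **kwargs)

    return wrapped

# abort_rate=1.0 always fails; abort_rate=0.0 behaves like the plain call.
flaky = with_fault_injection(lambda: "ok", abort_rate=1.0)
reliable = with_fault_injection(lambda: "ok", abort_rate=0.0)
```

Because the injection lives in the proxy layer, teams can rehearse an outage against one service in staging without modifying that service at all.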
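The downtime resistance in the list above comes from the proxy retrying a request against other instances when one fails. A minimal failover sketch, with hypothetical instance names and a toy transport function:

```python
def call_with_failover(instances, send):
    """Try instances in order, skipping failed ones, like a mesh retry policy."""
    last_error = None
    for instance in instances:
        try:
            return send(instance)
        except ConnectionError as err:
            last_error = err   # this instance is down; try the next one
    raise last_error           # every instance failed

def send(instance):
    # Toy transport: instances named "bad" simulate a failed service.
    if "bad" in instance:
        raise ConnectionError(f"{instance} unreachable")
    return f"response from {instance}"

result = call_with_failover(["bad-1:8080", "bad-2:8080", "good-1:8080"], send)
# → "response from good-1:8080"
```

The calling service only ever sees the successful response; the detour around the failed instances happens entirely inside the mesh layer.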
The disadvantages of service mesh
an increase in the number of execution instances with the risk of adding latency to the architecture;
no native support for integration with systems and services outside the mesh;
network-management complexity that is centralised but not eliminated;
a configuration that requires familiarity and new skills within development teams.
Despite these drawbacks, the advantages and functionality of service meshes mean they are still widely used in large applications, particularly those running on Kubernetes. What about you: do you work with this setup? What do you think of its advantages and disadvantages? Feel free to share your feedback on service meshes on the forum!
The main service mesh providers and platforms