
Why Use a Service Mesh in Your Kubernetes Environment

March 21, 2022

Featured article by Jeff Broth


As applications grow in complexity, their networking and management requirements also increase. The popularity of microservices-based architectures has decoupled application components, bringing unparalleled flexibility and modularity to both development and management. However, users must ensure proper connectivity between the different components and resources that make up these decoupled architectures, which can be a complex and time-consuming task. This is where service meshes come into play, offering a dedicated infrastructure layer to control network communications.

What is a Service Mesh?

A service mesh enables the separation of an application's business logic from network management, security, and monitoring. Configuring each microservice individually for all of these concerns is difficult even with a handful of services, and it becomes impractical as the number of microservices grows.

Decoupling business logic allows developers to focus on application functionality, while network specialists and operations teams focus on configuring networking, security, and monitoring. This is achieved through a sidecar container that the service mesh injects into Pods. The sidecar contains a proxy that intercepts all traffic to and from the application container and routes it through the service mesh.
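With Istio, for example, sidecar injection can be enabled by labeling a namespace, so every Pod created in it automatically receives the proxy. A minimal sketch (the namespace name is illustrative):

```yaml
# Label a namespace so Istio automatically injects its Envoy sidecar
# into every Pod created in it. "demo-apps" is an example name.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-apps
  labels:
    istio-injection: enabled
```

Existing Pods in the namespace must be restarted to pick up the sidecar, since injection happens at Pod creation time.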

These proxies form the data plane of the service mesh, which handles communication between services, while the control plane manages the behavior of the proxies. The control plane allows users to control all aspects of the service mesh, such as traffic control, resilience, and security. Some popular service mesh options include Istio, HashiCorp Consul, and Linkerd.

The Functionality of a Service Mesh

The main thing to remember is that a service mesh does not introduce new application functionality; users still need to specify how traffic should be routed. As mentioned previously, a service mesh abstracts the logic of service-to-service communication out of individual services and into a dedicated infrastructure layer.

If we look at a service mesh like Istio, it uses lightweight Envoy proxies running as sidecars in Kubernetes Pods to enable communication between services. These sidecars create the mesh network used by the service mesh, and users configure communication policies that the data plane enforces. Users should create targeted policies at the service or application level, as overly broad policies can affect the wider Kubernetes configuration.

When the Kubernetes cluster receives a network request, the proxies route the traffic within the mesh according to the policies defined by users, which the control plane distributes to them. Istio also generates telemetry data, ranging from metrics and logs to traces, for all activity within the mesh. This information gives users a top-down view of the entire network. Service meshes provide the following benefits for running microservices at scale.
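A minimal sketch of such a user-defined routing policy in Istio, assuming an illustrative `reviews` service with `v1` and `v2` subsets defined in a matching DestinationRule:

```yaml
# Route 90% of traffic to subset v1 and 10% to v2 (a simple canary).
# The "reviews" host and its subsets are illustrative; the subsets
# would be defined in a separate DestinationRule.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

The control plane pushes this configuration to the Envoy sidecars, which then split traffic accordingly without any change to the application code.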

– Complete observability over the environment, leading to easier troubleshooting and optimization. As the control plane tracks performance on a service-by-service basis, users can target optimizations at the exact services facing performance issues.

– Users can easily support any type of release strategy, such as canary or blue-green deployments, without complex configuration changes, thanks to the flexible nature of the mesh network.

– The availability and resilience of the network and services can be greatly increased with features like failovers, circuit breakers, and fault injection.

– Secure communication with built-in support for authentication, authorization, and network traffic encryption.

– Extensive load balancing configurations and routing controls to facilitate any network needs.

– Automated service discovery across the Kubernetes environment.
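As one example of the resilience features above, circuit breaking in Istio can be configured through a DestinationRule. A sketch, with an illustrative `ratings` service and example thresholds:

```yaml
# Limit concurrent connections and pending requests, and temporarily
# eject hosts that return consecutive 5xx errors - a basic circuit
# breaker for the (illustrative) "ratings" service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ratings-circuit-breaker
spec:
  host: ratings
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 10
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

When the limits are exceeded, the sidecar proxies fail fast instead of letting requests pile up, protecting the rest of the mesh from a struggling service.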

Why Use a Service Mesh?

A service mesh seems to be the ideal tool for managing networking within Kubernetes. Yet is the additional management and maintenance overhead it introduces worth it for every application deployment? Service meshes are geared toward large-scale applications consisting of many microservices. That does not mean a service mesh cannot be used for small or medium-scale applications, but it will provide fewer benefits there than in a large-scale application that can make full use of it.

A service mesh can facilitate complex routing capabilities and optimize data flow between services regardless of network traffic growth. Additionally, developers can focus solely on service functionality without worrying about network requirements, as the service mesh decouples the network logic. And because secure communication is a core feature of a service mesh, users can secure communications between services with relative ease while retaining complete network observability.
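In Istio, for instance, mesh-wide mutual TLS between services can be enforced with a single policy. A minimal sketch:

```yaml
# Require mutual TLS for all workloads in the mesh by applying a
# mesh-wide PeerAuthentication policy in Istio's root namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

With this in place, the sidecars encrypt and authenticate service-to-service traffic automatically, with no changes to application code.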

From a DevOps standpoint, service meshes are essential for the seamless deployment of services to a Kubernetes cluster when creating CI/CD pipelines. A service mesh also enables users to codify their networking and security policies and manage them through those pipelines. It also helps teams implement operational frameworks like GitOps and build better automated processes.
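Codifying mesh policies can be as simple as keeping the manifests in the same Git repository as the application and applying them from the pipeline. A sketch using Kustomize (file names are illustrative):

```yaml
# Example kustomization.yaml checked into Git alongside application
# manifests, so the CI/CD pipeline applies mesh policies the same
# way it applies application code. File names are illustrative.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - virtual-service.yaml
  - destination-rule.yaml
  - peer-authentication.yaml
```

The Git repository then becomes the single source of truth for networking and security configuration, which is the core idea behind GitOps.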


Service meshes have become integral to facilitating communication between services in microservices-based architectures. As a dedicated infrastructure layer that abstracts network logic away from services, they provide near-unlimited scalability, security, and control over service communications across a Kubernetes environment.

