OpenShift Service Mesh: Unlocking Microservices Efficiency

Service mesh technology is a game-changer for microservices architecture. It provides a way to manage service-to-service communication, making it easier to monitor, secure, and maintain complex systems.

With OpenShift Service Mesh, you can expect a significant reduction in the complexity of your microservices setup. This is because the service mesh takes care of the underlying communication, allowing you to focus on building your applications.

One of the key benefits of OpenShift Service Mesh is its ability to provide real-time traffic management. This means you can easily route traffic between services, reducing latency and improving overall system performance.

Architecture

Red Hat OpenShift Service Mesh is logically split into two main components: the data plane and the control plane. The data plane consists of intelligent proxies deployed as sidecars that intercept and control all inbound and outbound network communication between microservices in the service mesh.

These sidecar proxies are based on Envoy, which intercepts all inbound and outbound traffic for every service in the service mesh, and they communicate with Mixer, the general-purpose policy and telemetry hub.

The control plane manages and configures proxies to route traffic, and configures Mixers to enforce policies and collect telemetry. It consists of several components, including Mixer, Pilot, Citadel, and Galley.

Here are the components of the control plane:

  • Mixer enforces access control and usage policies and collects telemetry data.
  • Pilot configures the proxies at runtime, providing service discovery, traffic management, and resiliency.
  • Citadel issues and rotates certificates, providing strong service-to-service and end-user authentication.
  • Galley ingests, validates, and distributes the service mesh configuration.

Red Hat Architecture

Red Hat OpenShift Service Mesh is logically split into a data plane and a control plane.

The data plane is made up of intelligent proxies deployed as sidecars, which intercept and control all network communication between microservices.

Envoy proxy is the key component of the data plane, intercepting all inbound and outbound traffic for all services in the service mesh.
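
In Red Hat OpenShift Service Mesh, workloads opt in to sidecar injection rather than having it applied cluster-wide. A minimal sketch of a Deployment requesting a sidecar; the name, namespace, and image are illustrative, and the namespace is assumed to be part of the mesh's member roll:

```yaml
# Illustrative Deployment opting into Envoy sidecar injection in OpenShift Service Mesh.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer             # illustrative workload name
  namespace: project1        # illustrative namespace; must be a member of the mesh
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customer
  template:
    metadata:
      labels:
        app: customer
      annotations:
        sidecar.istio.io/inject: "true"   # asks the mesh to inject the Envoy sidecar proxy
    spec:
      containers:
        - name: customer
          image: quay.io/example/customer:latest   # placeholder image
          ports:
            - containerPort: 8080
```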

Mixer is the general-purpose policy and telemetry hub that enforces access control and usage policies, and collects telemetry data from the Envoy proxy and other services.

Alongside Mixer, the control plane includes Pilot, which configures the proxies at runtime and provides service discovery, traffic management, and resiliency; Citadel, which issues and rotates certificates; and Galley, which ingests, validates, and distributes the mesh configuration.

Red Hat OpenShift Service Mesh also uses the istio-operator to manage the installation of the control plane.

Distributed Architecture

In a distributed architecture, services are designed to work together across different locations, making it easier to scale and manage complex systems. This approach is particularly useful in cloud-native, microservices-based applications.

Red Hat OpenShift Service Mesh is a key component in distributed architecture, logically splitting into a data plane and a control plane. The data plane consists of intelligent proxies deployed as sidecars, which intercept and control network communication between microservices.

The control plane manages and configures proxies to route traffic and enforce policies. It includes Mixer, which enforces access control and usage policies, and Pilot, which provides service discovery and traffic management capabilities.

Red Hat OpenShift distributed tracing provides high scalability and no single points of failure, making it suitable for large-scale systems. Distributed Context Propagation enables you to connect data from different components together to create a complete end-to-end trace.

The distributed tracing platform is based on the open source Jaeger project, which consists of several components, including Jaeger Client, Jaeger Agent, Jaeger Collector, Storage, Query, Ingester, and Jaeger Console.

Here are the main components of the distributed tracing platform:

  • Jaeger Client: responsible for instrumenting applications for distributed tracing
  • Jaeger Agent: listens for spans sent over UDP and batches them for the collector
  • Jaeger Collector: receives spans and places them in an internal queue for processing
  • Storage: a persistent storage backend for span data
  • Query: retrieves traces from storage
  • Ingester: reads data from Kafka and writes it to another storage backend
  • Jaeger Console: provides a user interface for visualizing distributed tracing data
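
These components are typically deployed by the Jaeger Operator from a Jaeger custom resource. A rough sketch follows; the name and strategy are illustrative, and in a Service Mesh installation the tracing stack is normally enabled through the control plane resource rather than created by hand:

```yaml
# Illustrative Jaeger instance using the all-in-one strategy (evaluation use only).
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-all-in-one    # illustrative name
  namespace: istio-system
spec:
  strategy: allInOne          # collector, query, and agent in a single pod with in-memory storage
```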

Understanding

Kiali provides visibility into your service mesh by showing you the microservices in your service mesh, and how they are connected.

The service mesh is logically split into a data plane and a control plane. The data plane is a set of intelligent proxies deployed as sidecars, which intercept and control all inbound and outbound network communication between microservices.

The control plane manages and configures proxies to route traffic, and configures Mixers to enforce policies and collect telemetry. Mixer enforces access control and usage policies, and collects telemetry data from Envoy proxy and other services.

Red Hat OpenShift Service Mesh uses the istio-operator to manage the installation of the control plane. An Operator is a piece of software that enables you to implement and automate common activities in your OpenShift Container Platform cluster.

The control plane components, described above, are Mixer for policy and telemetry, Pilot for service discovery and traffic management, Citadel for certificate issuance and rotation, and Galley for configuration validation and distribution.

What Is a Service Mesh

A service mesh is a layer of infrastructure that enables communication between microservices in a distributed system. It's like a network of roads that helps different services talk to each other.

Service meshes provide features such as security, traffic management, and traffic flow visualizations, which are essential for modern, cloud-native applications. They enable teams to implement these functions without having to design solutions from scratch.

A service mesh typically consists of a control plane and a data plane. The control plane manages the configuration and lays out the environment, while the data plane handles the communication between services. In Istio, for example, the control plane programs and deploys proxy sidecars alongside application pods.

The data plane is made up of the sidecars, which represent the communication between microservices. In Red Hat OpenShift Service Mesh, the data plane is a set of intelligent proxies deployed as sidecars that intercept and control all inbound and outbound network communication.

Some key components of a service mesh include Envoy proxies, Mixers, and Pilots. Envoy proxies intercept traffic and communicate with Mixer, which enforces policies and collects telemetry data. Pilots configure the proxies at runtime and provide service discovery, traffic management, and resiliency.

Here are some key functions of a service mesh:

  • Monitor distributed transactions
  • Optimize performance and latency
  • Perform root cause analysis

Distributed tracing is a feature of service meshes that allows you to instrument your services to gather insights into your service architecture. It's based on the open source Jaeger project and provides a way to monitor, profile, and troubleshoot your microservices-based applications.

Upstream Comparison

OpenShift Service Mesh (RH OSSM) is a distribution of Istio that offers tighter integration and certification for Red Hat OpenShift. This means it's specifically designed to work seamlessly with OpenShift.

OSSM also includes additional security hardening not found in upstream Istio. This provides an extra layer of protection for your applications.

Red Hat provides enterprise-grade support for every component of the service mesh, and that support is included with OCP licenses, so you don't have to pay extra.

Upstream Istio, on the other hand, is a fast-moving open source project with a shorter support lifespan. This can be a concern for organizations that need long-term support and stability.

Here's a summary of the key differences between RH OSSM and upstream Istio:

  • OpenShift Integration: RH OSSM has tighter integration with OpenShift
  • Advanced Security: RH OSSM includes additional security hardening
  • Enterprise Support: RH OSSM has a longer support lifespan, and support is included with OCP licenses

Understanding Red Hat

Red Hat's service mesh is built on top of open-source projects, making it a powerful and flexible tool.

Istio is a key component of Red Hat's service mesh, handling traffic control and ensuring that applications are running smoothly.

Kiali is another important part of the service mesh, providing traffic visualization to help developers understand how their applications are interacting with each other.

Jaeger is the third component of Red Hat's service mesh, offering request tracing to identify and diagnose issues in complex systems.

To manage and deploy applications, Red Hat recommends using OpenShift Operators, which can be installed via the OperatorHub.

Red Hat Distributed Features

Red Hat OpenShift distributed tracing provides several key capabilities, including integration with Kiali, high scalability, distributed context propagation, and backwards compatibility with Zipkin.

Red Hat OpenShift distributed tracing is based on the open source Jaeger project and consists of two main components: the distributed tracing platform and distributed tracing data collection.

You can use Red Hat OpenShift distributed tracing to monitor distributed transactions, optimize performance and latency, and perform root cause analysis.

Some of the key features of Red Hat OpenShift distributed tracing include:

  • Integration with Kiali
  • High scalability
  • Distributed Context Propagation
  • Backwards compatibility with Zipkin

Red Hat OpenShift distributed tracing is designed to have no single points of failure and to scale with business needs, making it a reliable and efficient solution for distributed tracing.

Routes vs Ingress

In OpenShift, Routes and Service Mesh ingress differ in a key way. An OpenShift Route uses the OpenShift IngressController/Router (HAProxy) to direct traffic into the cluster, pointing to a specific service selected by labels.

To take advantage of Service Mesh features, you need to use Istio resources: Ingress Gateway, VirtualService, and DestinationRule.

The Ingress Gateway is used to get traffic into the cluster and into the Service Mesh, and a Route points to the Istio Ingress Gateway instead of accessing it directly.

Here are the main components used in this process:

  • Ingress Gateway
  • VirtualService
  • DestinationRule

The Route points to the Istio Ingress Gateway using the host of {app}{app-ns}{istio-ns}.apps and the port for http2.

The Gateway is defined in the namespace of our apps, with a selector that links it to the Istio Ingress Gateway.
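
As a sketch, a Gateway of this kind might look like the following, assuming the default `istio: ingressgateway` selector and an illustrative host name:

```yaml
# Illustrative Gateway defined in the application namespace and bound to the Istio Ingress Gateway.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: app-gateway          # illustrative name
  namespace: project1        # illustrative application namespace
spec:
  selector:
    istio: ingressgateway    # links the Gateway to the Istio ingress gateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "app.example.com"  # placeholder host; the OpenShift Route points at this host
```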

Key Features

The key features of OpenShift Service Mesh are what make it a powerful tool for managing microservices.

The Kiali console, integrated with Red Hat Service Mesh, offers a range of capabilities to help you monitor and manage your applications.

You can quickly identify issues with applications, services, or workloads using the Health feature.

The Topology feature provides a visual representation of how your applications, services, or workloads communicate via the Kiali graph.

Red Hat OpenShift distributed tracing features also play a crucial role in OpenShift Service Mesh.

When properly configured, distributed tracing data can be viewed directly from the Kiali console.

Here are some of the key features of Red Hat OpenShift distributed tracing:

  • Integration with Kiali
  • High scalability
  • Distributed Context Propagation
  • Backwards compatibility with Zipkin

Setup and Configuration

To set up and configure OpenShift Service Mesh, you'll need to install the necessary Operators from OperatorHub. This can be done using the CLI command `oc apply -f manifests/ossm-sub.yaml`. The installation process also involves waiting for the `servicemeshcontrolplanes.maistra.io` and `kialis.kiali.io` CRDs to be established.

The installation process will output the status of the custom resource definitions, indicating when their conditions are met. You'll also see the creation of a subscription for the Jaeger product and the Kiali Operator. The CSV (Cluster Service Version) for each Operator will also be displayed, showing the version, display name, and phase of the installation.

To verify the installation, you can check the status of the control plane by running the `get-smcp-status.sh` script or using the CLI command `oc get smcp/basic -n istio-system`. This will show you the status of the control plane, including the number of components ready and the version of the Service Mesh.
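
As a sketch, the waiting and verification steps described above can be run from the command line; the resource names match the ones used in this article:

```bash
# Wait for the Service Mesh related CRDs to be established (the article waits up to 180 seconds).
oc wait --for=condition=established crd/servicemeshcontrolplanes.maistra.io --timeout=180s
oc wait --for=condition=established crd/kialis.kiali.io --timeout=180s

# Check the Operator CSVs in the openshift-operators namespace.
oc get csv -n openshift-operators

# Check the control plane status once the ServiceMeshControlPlane named "basic" has been created.
oc get smcp/basic -n istio-system
```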

Setup Control Plane

To set up a control plane, you'll need to install specific Operators from OperatorHub. This involves installing the Jaeger Operator, Kiali Operator, and Service Mesh Operator using the CLI and applying the manifests from the `ossm-sub.yaml` file. You'll also need to wait for the CRD conditions to be met, which can take up to 180 seconds.

The installation process will output the custom resource definitions and subscription information. You can verify this by checking the CSVs in the `openshift-operators` namespace.

Here's a summary of the installed Operators:

  • Jaeger Operator: provides the distributed tracing platform
  • Kiali Operator: provides the service mesh console and visualization
  • Service Mesh Operator: installs and manages the Istio-based control plane

Once the installation is complete, you can create a control plane by creating a ServiceMeshControlPlane custom resource in the `istio-system` namespace and waiting for the control plane to be established.
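
A minimal ServiceMeshControlPlane sketch for the `basic` control plane referenced elsewhere in this article might look like this; the add-on settings are illustrative and suited to evaluation rather than production:

```yaml
# Illustrative ServiceMeshControlPlane for the "basic" control plane in istio-system.
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  tracing:
    type: Jaeger             # use the Jaeger-based distributed tracing stack
  addons:
    kiali:
      enabled: true          # deploy the Kiali console alongside the control plane
    jaeger:
      install:
        storage:
          type: Memory       # in-memory span storage; fine for evaluation, not production
```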

Apply to Partner Micro

Now that we've set up customer microservice ingress routing, it's time to apply the same principles to the partner microservice. We've already established that everything is working smoothly with the customer microservice, so we can build on that momentum.

To apply the Istio routing to the partner microservice, we follow the same steps we used for the customer microservice. This ensures consistency and makes it easier to manage our microservices.

With the partner microservice set up, we can now leverage the benefits of Istio routing, including traffic management and security. This helps us better control and monitor our application's traffic flow.

Migrations & Updates

Migrations & Updates are a breeze with OpenShift Service Mesh, thanks to features like A/B deployments and Canary deployments. These configurations allow for seamless updates or migrations of services or entire clusters without any downtime.

A/B deployments enable you to deploy new services alongside existing ones, slowly routing traffic to the new service over time. This is made possible with Destination Rules, which allow you to control traffic flow between services.

With Canary deployments, you can test new services with a small percentage of users before rolling them out to everyone. This reduces the risk of downtime or errors during the update process.
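
As a sketch, a canary split of this kind can be expressed with a DestinationRule that defines version subsets and a VirtualService that assigns weights to them. The example below mirrors the 70/30 frontend split used later in this article; the namespace and version labels are illustrative:

```yaml
# Illustrative canary split for the frontend service: 70% of traffic to v1, 30% to the new v2.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: frontend
  namespace: project1        # illustrative namespace used elsewhere in this article
spec:
  host: frontend.project1.svc.cluster.local
  subsets:
    - name: v1
      labels:
        version: v1           # matches pods labeled version=v1
    - name: v2
      labels:
        version: v2           # matches pods labeled version=v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend
  namespace: project1
spec:
  hosts:
    - frontend.project1.svc.cluster.local
  http:
    - route:
        - destination:
            host: frontend.project1.svc.cluster.local
            subset: v1
          weight: 70
        - destination:
            host: frontend.project1.svc.cluster.local
            subset: v2
          weight: 30          # increase gradually as confidence in v2 grows
```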

Traffic Management

Traffic Management is a crucial aspect of OpenShift Service Mesh. You can control the flow of traffic and API calls between services using OpenShift Service Mesh.

With OpenShift Service Mesh, you can route traffic based on weights, HTTP headers, and more. This is useful for advanced traffic routing like canary deployments, blue-green deployments, A/B testing, and more.

OpenShift Service Mesh uses Envoy's configurable proxying functionality to achieve this. This functionality can be controlled using Destination Rules and other configuration options.

A virtual service lets you configure how requests are routed to a service within an Istio service mesh. You can use virtual services to route traffic to a given destination, and then use destination rules to configure what happens to traffic for that destination.

Destination Rules are applied after virtual service routing rules are evaluated. You can use destination rules to specify named service subsets, such as grouping all a given service's instances by version.

Gateways are used to manage inbound and outbound traffic for your mesh. You can use gateways to specify which traffic you want to enter or leave the mesh.

Here are some key components of OpenShift Service Mesh for Traffic Management:

  • A virtual service lets you configure how requests are routed to a service within an Istio service mesh.
  • Destination Rules are used to specify named service subsets, such as grouping all a given service's instances by version.
  • Gateways are used to manage inbound and outbound traffic for your mesh.
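
Putting these pieces together, here is a sketch of header-based routing for a hypothetical customer microservice; it assumes a DestinationRule that defines `v1` and `v2` subsets, and the header name is illustrative:

```yaml
# Illustrative header-based routing: requests carrying "x-beta-tester: true" reach the v2 subset.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customer              # hypothetical customer microservice
  namespace: project1         # illustrative namespace
spec:
  hosts:
    - customer
  http:
    - match:
        - headers:
            x-beta-tester:
              exact: "true"
      route:
        - destination:
            host: customer
            subset: v2
    - route:
        - destination:
            host: customer
            subset: v1        # default route for all other requests
```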

Observability and Monitoring

Observability and monitoring are crucial for understanding and debugging complex microservices architectures. OpenShift Service Mesh integrates with tools like Kiali, Jaeger, and Prometheus to provide observability.

Kiali provides a graphical network topology visualization of services and metrics on RPCs between them. This helps developers understand how microservices are connected.

Jaeger enables distributed tracing, which follows the path of a request through various microservices. This helps developers visualize call flows in large service-oriented architectures.

Distributed tracing records the execution of individual requests across the whole stack of microservices and presents them as traces. A trace is a data/execution path through the system.

To get started with observability in OpenShift Service Mesh, you can check the Kiali Console. To do this, log in to the OpenShift Developer Console, select the project "istio-system", and open the Kiali console.

Here's a step-by-step guide to route weighted traffic to the frontend app and observe it in the Kiali Console:

  • Open the Kiali Console as described above
  • Update the frontend virtual service with weight routing: `cat manifests/frontend-virtual-service-with-weight-routing.yaml | sed 's/DOMAIN/'$DOMAIN'/' | oc apply -n project1 -f -`
  • Patch the virtual service with the new weight routing: `oc patch virtualservice frontend --type='json' -p='[{"op":"replace","path":"/spec/http/0","value":{"route":[{"destination":{"host":"frontend.project1.svc.cluster.local","port":{"number":8080},"subset":"v1"},"weight":70},{"destination":{"host":"frontend.project1.svc.cluster.local","port":{"number":8080},"subset":"v2"},"weight":30}]}}]' -n project1`
  • Get the frontend Istio route: `FRONTEND_ISTIO_ROUTE=$(oc get route -n istio-system | grep frontend-gateway | awk '{print $2}')`
  • Continuously request the frontend Istio route to generate traffic: `while [ 1 ]; do OUTPUT=$(curl -s $FRONTEND_ISTIO_ROUTE); printf "%s\n" "$OUTPUT"; sleep .2; done`

By following these steps, you can observe the traffic analysis for the frontend app in the Kiali Console. To do this, select Applications -> frontend and review its inbound and outbound traffic.

Security and Access

OpenShift Service Mesh takes security and access to the next level by securing service-to-service communication via mutual TLS, or mTLS. This ensures that data is encrypted in transit, providing an additional layer of protection.

With OpenShift Service Mesh, you can enforce fine-grained access control policies between services using the SPIFFE identity standard. This allows for a high degree of customization and flexibility in determining what services can communicate with each other.

OpenShift Service Mesh also integrates with OpenShift's security features, such as network policies and role-based access control (RBAC). This means you can require that Service A is only accessible to Service B but not Service C, increasing security in modern dynamic microservices environments through isolation.
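
As a sketch, strict mTLS can be enforced for a single namespace with a PeerAuthentication resource; the namespace is illustrative:

```yaml
# Require mTLS for all workloads in the project1 namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: project1        # illustrative application namespace
spec:
  mtls:
    mode: STRICT             # reject any plain-text traffic to sidecar-injected workloads
```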

You can also use AuthorizationPolicies in Istio to control traffic to applications. Two key benefits of AuthorizationPolicies are that they allow both ALLOW and DENY rules, as well as distinction between HTTP GET and POST requests.
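
A sketch of such a policy, assuming a hypothetical `customer` workload that should accept GET requests only from the `partner` service account; all names are illustrative:

```yaml
# Illustrative AuthorizationPolicy: allow GET requests to the customer workload only from the partner service account.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: customer-allow-get
  namespace: project1        # illustrative namespace
spec:
  selector:
    matchLabels:
      app: customer          # applies to the customer workload
  action: ALLOW
  rules:
    - from:
        - source:
            principals:
              - cluster.local/ns/project1/sa/partner   # SPIFFE-style identity of the caller
      to:
        - operation:
            methods: ["GET"]  # other methods and other callers are denied by the implicit default
```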

Here are some key benefits of OpenShift Service Mesh's security and access features:

  • Secures service-to-service communication via mutual TLS (mTLS)
  • Enforces fine-grained access control policies between services using the SPIFFE identity standard
  • Integrates with OpenShift's security features, such as network policies and RBAC
  • Allows for distinction between HTTP GET and POST requests with AuthorizationPolicies

High Availability and Resilience

OpenShift Service Mesh provides considerable resilience capabilities, forming the foundation of many highly-available (HA) deployment patterns.

This is achieved through advanced traffic routing features, which can load balance traffic between multiple services based on various parameters, such as geographic origination, utilization, or even pre-determined ratios.

For example, Virtual Services can be configured to load balance traffic between multiple services, ensuring that traffic is distributed evenly and that no single service is overwhelmed.

OpenShift Service Mesh can also stretch across clusters, providing HA options for the cluster and mesh control planes as well.

By default, Envoy will automatically retry if it gets a response with code 503, ensuring that requests are not lost due to temporary issues with a service.

This is demonstrated in Example 1, where a backend pod is forced to return a 503 response and Envoy automatically retries the request.
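
The retry behavior can also be tuned per service through a VirtualService. A sketch for an illustrative `backend` service, with assumed values for attempts and timeouts:

```yaml
# Illustrative retry policy: up to 3 attempts on 5xx responses, each with a 2s per-try timeout.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: backend
spec:
  hosts:
    - backend                # illustrative service name
  http:
    - route:
        - destination:
            host: backend
      retries:
        attempts: 3
        perTryTimeout: 2s
        retryOn: 5xx         # also accepts values such as "connect-failure,retriable-status-codes"
```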

Envoy's Locality Load Balancing feature is also enabled by default, which defines geographic locations by region, zone, and subzone. This ensures that requests are sent to pods within the same geographic location, improving performance and reducing latency.

For instance, in Example 1, it is shown that responses come from pods in the same AZ (us-east-2a) as the frontend.
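
Locality-aware load balancing only takes effect when outlier detection is configured on the destination. A sketch of such a DestinationRule for an illustrative `backend` service, with assumed thresholds:

```yaml
# Illustrative DestinationRule: outlier detection is required for locality load balancing to apply.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend
spec:
  host: backend              # illustrative service name
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 3    # eject an endpoint after 3 consecutive 5xx responses
      interval: 10s              # how often endpoints are evaluated
      baseEjectionTime: 30s      # how long an ejected endpoint stays out of the pool
```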

Here's a summary of the resilience features provided by OpenShift Service Mesh:

  • Load balancing between multiple services
  • Automatic retries for 503 responses
  • Locality Load Balancing for geographic location-based routing

These features provide a solid foundation for building highly-available and resilient applications on OpenShift.

Frequently Asked Questions

What is the difference between Red Hat service mesh and Istio?

Red Hat OpenShift Service Mesh supports multiple control planes, whereas Istio takes a single tenant approach. This difference is managed through a multitenant operator in Red Hat OpenShift Service Mesh.
