OpenShift is a cloud-native platform for building, deploying, and managing containerized applications. It is built on top of Kubernetes, the open-source container orchestration system.
The architecture of OpenShift is designed to be highly scalable and flexible, making it suitable for a wide range of applications and use cases. OpenShift uses a control plane/worker architecture: control plane nodes manage the cluster, while worker nodes run the actual application workloads.
To deploy OpenShift, you create a cluster with at least one control plane node and one or more worker nodes. Each node in the cluster can be either a physical machine or a virtual machine, and the choice of node type will depend on the specific requirements of your application.
Architecture and Components
OpenShift's control plane is responsible for managing the cluster and its worker nodes. It performs key tasks such as receiving and authenticating management requests through the API, storing cluster state and information about the environment and applications, making pod placement (scheduling) decisions, and monitoring pod health and scaling pods up and down.
The control plane also secures the cluster by encrypting and authenticating requests with TLS. Each pod and service running in the cluster is assigned a unique IP address, which is reachable by other pods and services but not by external clients.
Here are the main tasks performed by the control plane:
- API and authentication
- Datastore
- Scheduler
- Health checks and scaling
Worker Nodes
Worker nodes are the backbone of your OpenShift Container Platform cluster, responsible for running the pods that contain your applications and services. By default, data stored in a container is lost when the container shuts down, because containers are ephemeral.
To avoid this, you can use persistent storage for databases or other stateful services. This ensures that your data survives even if the container restarts.
Each pod and service running in the cluster is assigned a unique IP address, making it reachable by other pods and services in the cluster, but not by external clients. This is a key aspect of how OpenShift Container Platform manages network traffic and access.
Here's a quick rundown of what you can expect from a worker node:
- Runs pods, which are the smallest unit that can be defined, deployed, and managed.
- Each pod contains one or more containers, which hold applications and their dependencies.
- Persistent storage can be used to avoid data loss when containers shut down.
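As a minimal sketch of how a workload on a worker node might claim persistent storage (the claim name, size, image, and mount path are illustrative assumptions, not from the original article), a PersistentVolumeClaim can be mounted into a pod:

```yaml
# Hypothetical example: a PersistentVolumeClaim and a pod that mounts it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal  # assumed image
      volumeMounts:
        - name: data          # mount the claimed volume into the container
          mountPath: /var/lib/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data   # data written here survives container restarts
```

Because the data lives on the claimed volume rather than in the container's writable layer, it outlives any individual container.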
Persistent Storage
Persistent storage is a must-have for running stateful applications in OpenShift. Data written by a container is kept in a persistent storage volume attached to it, so restarting or deleting the container does not lose the stored data.
You can configure storage for the Collector, Ingester, and Query services under spec.storage. This is where you define the storage options for your distributed tracing platform.
For production environments, it's recommended to use Elasticsearch for persistent storage. Memory storage is only suitable for development, testing, and proof of concept environments, as the data does not persist if the pod is shut down.
The Red Hat OpenShift distributed tracing platform Operator exposes these storage parameters under spec.storage. Elasticsearch storage additionally requires configuring the index cleaner parameters.
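As a hedged sketch of such a configuration (following the Jaeger custom resource format; the resource name, server URL, and index-cleaner values are illustrative assumptions), Elasticsearch storage might be declared like this:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  strategy: production          # Collector, Ingester, and Query run separately
  storage:
    type: elasticsearch         # Elasticsearch rather than in-memory storage
    options:
      es:
        server-urls: https://elasticsearch.example.com:9200  # assumed URL
    esIndexCleaner:
      enabled: true
      numberOfDays: 7           # illustrative retention period
      schedule: "55 23 * * *"   # illustrative cron schedule
```

The index cleaner block keeps Elasticsearch from growing without bound by pruning old trace indices on a schedule.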
Networking
Networking is a crucial aspect of any container orchestration system, and OpenShift ships with a robust solution in place. OpenShift's default SDN is built around Open vSwitch, which provides three plug-ins to choose from.
Upstream Kubernetes, on the other hand, does not ship a networking implementation of its own. Instead, it relies on third-party network plug-ins to handle networking tasks, which can be more complicated for users who are new to container orchestration.
OpenShift's networking solution is designed to be easy to use and configure. It provides a straightforward way to manage network traffic and ensure that containers can communicate with each other.
Here are some key features of OpenShift's networking solution:
- Open vSwitch provides a robust and scalable networking foundation
- Three plug-ins are available (ovs-subnet, ovs-multitenant, and ovs-networkpolicy), each with its own strengths and weaknesses
Kubernetes and Service Mesh
Service Mesh
Service Mesh is a crucial component of modern cloud-native applications. OpenShift Service Mesh provides a platform for behavioral insight and operational control of networked microservices in a service mesh.
It lets you connect, secure, and monitor microservices within an OpenShift Container Platform environment. You can use the service mesh control plane features to configure and manage your service mesh.
Red Hat OpenShift Service Mesh adds communication capabilities to existing distributed applications without changing service code, which lets developers modernize their applications without disrupting their existing infrastructure.
Service mesh capabilities include service discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring for your existing services. These features help ensure your applications are resilient, scalable, and secure.
Gloo Platform adds advanced features to Red Hat OpenShift, including an Istio-based service mesh and a Kubernetes-native API gateway (Gloo Edge and Gloo Mesh). Many OpenShift customers choose the Solo.io Gloo products in place of the default OpenShift technologies for their advanced routing, security, and observability capabilities.
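As a sketch of how a mesh control plane is declared in OpenShift Service Mesh (the resource kind is real; the name, namespace, and version values shown are illustrative assumptions), a ServiceMeshControlPlane resource might look like this:

```yaml
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system      # assumed control plane namespace
spec:
  version: v2.5                # illustrative Service Mesh version
  tracing:
    type: Jaeger               # wire the mesh to distributed tracing
  policy:
    type: Istiod
  telemetry:
    type: Istiod
```

Services are then added to the mesh declaratively, without touching application code, which is what makes the "no service changes" claim above possible.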
Kubernetes
Kubernetes is an open source container orchestration system originally developed by Google. It's designed to make managing workloads and services easier, automating the deployment, operation, and scaling of containerized applications.
Kubernetes provides portable containers that can run on any infrastructure, making it a great choice for developers who need flexibility. With Kubernetes, developers can automate processes, load-balance traffic across containers, and orchestrate storage.
Kubernetes has a large developer community, which is a major advantage. This community support means that there are many resources available to help developers get started and troubleshoot issues.
Some key benefits of using Kubernetes include its scalable architecture, which allows for fast and large-scale development, management, and deployment. It also has the same license as OpenShift, the Apache License 2.0.
The main differences between Kubernetes and OpenShift lie in their communities and ecosystems: Kubernetes has a very large, vendor-neutral community supporting many languages and frameworks, while OpenShift's community is smaller and centered on Red Hat.
Ingress and Services
In an OpenShift cluster, the service layer defines pods and their access policies, provides persistent IP addresses and hostnames to pods, and allows applications to connect to each other.
The service layer also supports simple internal load balancing to distribute work across application components.
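As a minimal sketch of that service layer (the service name, label, and port are illustrative assumptions), a Service that load-balances across an application's pods looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend            # stable in-cluster hostname for the pods behind it
spec:
  selector:
    app: backend           # traffic is balanced across pods with this label
  ports:
    - port: 8080           # the service's persistent, cluster-internal port
      targetPort: 8080     # the container port traffic is forwarded to
```

Pods come and go, but the service's IP and hostname stay stable, which is what lets application components find each other.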
There are two main types of nodes in an OpenShift cluster: control plane (master) nodes and worker nodes, with applications residing on worker nodes.
Worker nodes, which can be virtual or physical machines, are where application services run.
The Ingress Operator is the component that implements the IngressController API and enables external access to services in an OpenShift Container Platform cluster.
It deploys one or more HAProxy-based ingress controllers to handle routing, making services accessible to external clients.
Ingress
In OpenShift, the Ingress Operator is a crucial component that enables external access to cluster services. It implements the IngressController API and deploys HAProxy-based ingress controllers to handle routing.
The Ingress Operator makes services reachable by external clients, which is essential for applications that need to be accessed from outside the cluster.
A cluster can have one or more Ingress Controllers, each responsible for handling routing for a specific set of services. This allows for high availability and scalability.
In an OpenShift cluster, applications reside on worker nodes, and services run on these nodes as well. The Ingress Operator ensures that services are accessible to external clients, even if they're running on worker nodes.
Worker nodes can be virtual or physical, and a cluster can have multiple worker nodes to distribute the workload. The Ingress Operator takes care of routing traffic to the correct worker node, making it easy to scale your application.
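As a sketch of how a service is typically exposed through the ingress controller on OpenShift (the host and service names are illustrative assumptions), a Route hands a cluster-internal service to the router for external access:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: backend
spec:
  host: backend.apps.example.com   # assumed external hostname
  to:
    kind: Service
    name: backend                  # the cluster-internal service to expose
  tls:
    termination: edge              # terminate TLS at the ingress controller
```

The ingress controller then routes external requests for that hostname to the service's pods, whichever worker nodes they happen to be running on.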
Services
Services are the backbone of an OpenShift cluster, defining pods and their access policies. They provide persistent IP addresses and hostnames to pods, making it easy for applications to connect with each other.
In an OpenShift cluster, there are two main types of nodes: master nodes and worker nodes. Applications reside on worker nodes, which can be virtual or physical.
Worker nodes are where all services run, and can be scaled up or down as needed. This allows for efficient use of resources and ensures that applications are always running smoothly.
Sampling Configuration Options
Sampling Configuration Options are crucial for distributed tracing in OpenShift. The platform Operator allows you to define sampling strategies that will be supplied to tracers.
There are two types of samplers supported by the distributed tracing platform libraries: Probabilistic and Rate Limiting. Probabilistic samplers make a random sampling decision with a probability equal to the value of the sampling.param property.
The sampling.param property can be set to a decimal or integer value, such as 0.1 or 1, which determines the probability of sampling. For example, setting sampling.param=0.1 samples approximately 1 in 10 traces.
Rate Limiting samplers use a leaky bucket rate limiter to ensure that traces are sampled with a certain constant rate. This can be set using the sampling.param property, such as sampling.param=2.0, which samples requests with the rate of 2 traces per second.
The default sampling strategy is probabilistic, with a 0.1% probability for all services if no configuration is provided. However, you can configure a different sampling strategy by setting the type parameter to either "probabilistic" or "ratelimiting".
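As a sketch of how a sampling strategy is supplied in the Jaeger custom resource (the resource name and parameter values are illustrative; the field names follow the Jaeger Operator conventions), a probabilistic default strategy might be defined like this:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: with-sampling
spec:
  sampling:
    options:
      default_strategy:
        type: probabilistic   # or "ratelimiting"
        param: 0.5            # sample roughly half of all traces
```

This strategy is served to tracers that do not declare their own configuration, overriding the 0.1% default described above.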
Pipelines and Cluster Manager
OpenShift Pipelines is a powerful tool for automating deployments across multiple platforms, abstracting low-level implementation details using Tekton building blocks. It's designed for distributed teams working on microservices-based architectures.
With OpenShift Pipelines, you can build images using Kubernetes tools such as Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko, which are portable to any Kubernetes platform. This means you can create consistent and reliable builds across different environments.
OpenShift Cluster Manager is a managed service that allows you to install, modify, operate, and upgrade Red Hat OpenShift clusters from a single dashboard. It guides you through the installation of OpenShift Container Platform, Red Hat OpenShift Service on AWS (ROSA), and OpenShift Dedicated clusters.
Here are some key benefits of OpenShift Cluster Manager:
- Guides you through the installation of OpenShift clusters
- Manages self-installed OpenShift Container Platform clusters, ROSA, and OpenShift Dedicated clusters
In terms of flexibility, OpenShift templates are generally considered less user-friendly than Kubernetes Helm charts. OpenShift Cluster Manager, however, still provides a convenient way to manage your clusters.
Pipelines
Pipelines are the backbone of modern software development, and OpenShift Pipelines is a powerful tool for automating deployments across multiple platforms. It's a cloud-native CI/CD solution powered by Kubernetes resources.
OpenShift Pipelines lets you automate deployments with a serverless CI/CD system that runs pipelines with all necessary dependencies in isolated containers. This means you can focus on writing code, not worrying about the underlying infrastructure.
Pipelines defined using standard CI/CD concepts are easily extensible and integrate with existing Kubernetes tools. This flexibility is a major advantage in today's fast-paced development environment.
You can build images using Kubernetes tools like Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko, which are portable to any Kubernetes platform. This means you can use the same tools across different projects and environments.
Here are the key capabilities of OpenShift Pipelines:
- A serverless CI/CD system that runs pipelines with all necessary dependencies in isolated containers.
- Pipelines defined using standard CI/CD concepts, which are easily extensible and integrate with existing Kubernetes tools.
- Ability to build images using Kubernetes tools such as Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko, which are portable to any Kubernetes platform.
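As a minimal sketch of a pipeline assembled from Tekton building blocks (the pipeline, task, and parameter names are illustrative assumptions; git-clone and buildah are commonly available Tekton tasks):

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: git-url
      type: string
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone          # assumed Task, e.g. from Tekton Hub
      params:
        - name: url
          value: $(params.git-url)
    - name: build-image
      runAfter:
        - fetch-source           # run only after the source is fetched
      taskRef:
        name: buildah            # build the container image with Buildah
```

Each task runs in its own isolated container with its dependencies, which is what makes the pipeline serverless and portable across Kubernetes platforms.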
Cluster Manager
Cluster Manager is a powerful tool that simplifies the process of managing Red Hat OpenShift clusters. It allows you to work with all clusters in your organization from a single dashboard.
With Cluster Manager, you can easily install, modify, operate, and upgrade Red Hat OpenShift clusters. This includes clusters installed using OpenShift Container Platform, Red Hat OpenShift Service on AWS (ROSA), and OpenShift Dedicated.
One of the key benefits of Cluster Manager is its ability to guide you through the installation process. This is especially helpful for those who are new to OpenShift or need a refresher on the installation process.
Cluster Manager also manages self-installed OpenShift Container Platform clusters, as well as ROSA and OpenShift Dedicated clusters.
Here are some key features of Cluster Manager:
- Guides you through the installation of OpenShift Container Platform, ROSA, and OpenShift Dedicated clusters.
- Manages self-installed OpenShift Container Platform clusters.
- Manages ROSA and OpenShift Dedicated clusters.
Elasticsearch Auto-Provisioning
Elasticsearch auto-provisioning greatly simplifies large-scale deployments.
With Elasticsearch Auto-Provisioning, you can automatically create and manage clusters based on predefined templates.
This feature is particularly useful for cloud environments where resources are dynamic and can change rapidly.
Elasticsearch Auto-Provisioning supports multiple templates for different use cases, such as development, testing, and production environments.
You can also define custom templates to fit your specific needs and requirements.
Auto-Provisioning can create multiple clusters in parallel, significantly reducing the time it takes to set up a new cluster.
This feature is tightly integrated with the Cluster Manager, allowing for seamless management of your Elasticsearch clusters.
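As one concrete, hedged example of auto-provisioning in this stack (field names follow the Jaeger custom resource; the node count and resource requests are illustrative assumptions), omitting an external Elasticsearch URL and describing the desired cluster lets the Operator provision Elasticsearch itself:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: auto-provisioned
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:              # no server-urls given: the Operator provisions the cluster
      nodeCount: 3              # illustrative cluster size
      redundancyPolicy: SingleRedundancy
      resources:
        requests:
          cpu: "1"              # illustrative resource requests
          memory: 4Gi
```

The Operator creates and manages the Elasticsearch nodes from this description, so no cluster has to be stood up by hand.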