OpenShift Architecture Overview and Key Features

OpenShift is a container application platform that provides a flexible and scalable way to deploy and manage applications.

It's built on top of Kubernetes, the open-source container orchestration system, and adds an integrated platform for building, deploying, and managing applications in a containerized environment.

This means developers can focus on writing code rather than worrying about the underlying infrastructure.

What Is OpenShift?

OpenShift is a containerization and container orchestration platform developed by Red Hat, a prominent open-source software company.

Kubernetes provides the foundation for its functionality, and OpenShift layers additional tools and services on top of it to streamline deploying, scaling, and administering containerized applications.

The platform is widely adopted by organizations because of this combination: a robust, scalable Kubernetes core plus the supplementary features OpenShift adds.

Architecture Components

The control plane is the heart of an OpenShift cluster, and it's made up of several key components. Each control plane node runs a series of Kubernetes and OpenShift services.

A production environment requires at least three control plane nodes, and each node is responsible for managing the deployment of pods onto worker nodes.

Here are the key components that run on each control plane node:

  • etcd: the key-value store that holds the cluster's configuration and state
  • kube-apiserver: exposes the Kubernetes API that all other components talk to
  • kube-controller-manager: runs the controllers that reconcile the cluster toward its desired state
  • kube-scheduler: decides which worker node each pod should run on
  • OpenShift-specific services, such as the OpenShift API server, OpenShift controller manager, OpenShift OAuth API server, and OpenShift OAuth server

About Kubernetes

Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications.

It's a fairly simple concept: worker nodes run the container workloads, while one or more master nodes manage the deployment of those workloads onto the workers.

Kubernetes uses a deployment unit called a Pod to wrap containers and provide extra metadata with the container.

A Pod can group several containers in a single deployment entity, making it easier to manage and deploy applications.

You can create additional kinds of objects, such as services and replication controllers, to define how containers reach the services they depend on and how many pod replicas should run at a time.

Services give containers a stable way to reach the backends they need, even when they don't know the specific IP addresses of the pods providing them.

Replication controllers indicate how many Pod Replicas are required to run at a time, allowing you to automatically scale your application to adapt to its current demand.

Here's a breakdown of the key components of Kubernetes:

  • Worker nodes: run the container workloads
  • Master nodes: manage the deployment of those workloads
  • Pods: wrap containers and provide extra metadata
  • Services: give other workloads a stable way to reach a set of pods
  • Replication controllers: declare how many pod replicas should run at a time
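To make the split concrete, here's a minimal sketch using the official Python client for Kubernetes (the kubernetes package) that lists each node with its role and shows which node every pod landed on. It assumes a kubeconfig with cluster access is already in place.

    # Minimal sketch: inspect nodes and pods with the Python Kubernetes client.
    from kubernetes import client, config

    config.load_kube_config()          # read credentials from ~/.kube/config
    core = client.CoreV1Api()

    # Master/control plane and worker roles show up as node-role.kubernetes.io/ labels.
    for node in core.list_node().items:
        roles = [label.split("/", 1)[1]
                 for label in (node.metadata.labels or {})
                 if label.startswith("node-role.kubernetes.io/")]
        print(node.metadata.name, roles)

    # Pods are the unit the masters schedule onto those worker nodes.
    for pod in core.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name, "->", pod.spec.node_name)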

Components

OpenShift has two main types of nodes: control plane and worker nodes.

Control plane nodes are the core of the cluster, responsible for managing and scheduling pods.

Worker nodes, on the other hand, make up the rest of the nodes in a cluster and are where the control plane schedules pods.

Each worker node communicates with the control plane to report its available capacity, allowing the control plane to determine where to schedule pods.

Like control plane nodes, worker nodes run the CRI-O and kubelet services.

Nodes

Nodes play a crucial role in a Kubernetes cluster. A node is either a master node or a worker node.

A master node (what OpenShift calls a control plane node) oversees the cluster and determines the optimal deployment locations for containers. It acts as the cluster's manager, ensuring that resources are allocated efficiently.

A worker node, on the other hand, executes containers and oversees their lifecycle. It's the node where the control plane schedules pods.

In a cluster, you can have multiple worker nodes, but you need at least three control plane nodes for a production environment. Each control plane node runs a series of Kubernetes and OpenShift services.

Here's a breakdown of the different node types:

  • Control plane (master) node: oversees the cluster, decides where workloads should run, and manages the deployment of pods onto worker nodes.
  • Worker node: executes containers and oversees their lifecycle.

Each node type has its own responsibilities, and together they keep the cluster running smoothly.

Pods

Pods are the smallest deployable units in OpenShift and Kubernetes, and a single pod can contain multiple containers that share a network namespace and storage volumes.

A pod represents a single logical host in the cluster, and it's the basic execution unit for applications running on Kubernetes.

Pods are lightweight and ephemeral, meaning they can be created and destroyed as needed, which makes them ideal for applications that require a high degree of flexibility and scalability.

In Kubernetes, pods are the building blocks of applications, and they're typically composed of one or more containers that work together to provide a specific service or function.
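As a minimal sketch with the Python kubernetes client, a pod that groups a web server with a sidecar container might look like the following; the demo namespace, container images, and names are illustrative assumptions, and the emptyDir volume is what both containers share.

    # Hypothetical two-container pod: both containers share the pod's network
    # namespace and an emptyDir volume. Names, namespace, and images are illustrative.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="web-with-sidecar", labels={"app": "web"}),
        spec=client.V1PodSpec(
            volumes=[client.V1Volume(name="shared-data",
                                     empty_dir=client.V1EmptyDirVolumeSource())],
            containers=[
                client.V1Container(
                    name="web",
                    image="registry.example.com/web:latest",
                    volume_mounts=[client.V1VolumeMount(name="shared-data",
                                                        mount_path="/var/www/html")]),
                client.V1Container(
                    name="sidecar",
                    image="registry.example.com/sidecar:latest",
                    volume_mounts=[client.V1VolumeMount(name="shared-data",
                                                        mount_path="/data")]),
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="demo", body=pod)

Because both containers sit in one pod, they are scheduled together, share localhost, and can exchange files through the shared volume.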

Storage and Networking

Storage and networking are crucial components of OpenShift architecture. OpenShift supports persistent storage requirements through the vSphere Cloud Provider and its corresponding volume plugin.

Persistent storage offerings are exposed as VMFS, NFS, or vSAN datastores, and enterprise-grade features like Storage Policy Based Management (SPBM) provide automated provisioning and management. This enables customers to guarantee QoS requested by their business-critical applications and enforce SLAs.

StorageClasses allow the creation of PersistentVolumes on-demand without having to create storage and mount it into OpenShift nodes upfront. Depending on the backend storage used, the datastores can be either vSAN, VMFS, or NFS.

Here's a brief overview of each storage option:

  • vSAN powers hyperconverged infrastructure solutions, providing excellent performance and reliability.
  • VMFS is a clustered file system that allows virtualization to scale beyond a single node for multiple VMware vSphere servers.
  • NFS is a distributed file protocol that lets clients access storage over the network as if it were local storage.
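These datastores back the StorageClasses mentioned above. As a rough sketch with the Python kubernetes client, a StorageClass pointing at the in-tree vSphere volume plugin could be registered like this; the provisioner string, the diskformat and datastore parameters, and the vsphere-fast name are assumptions to verify against the storage documentation for your cluster version.

    # Sketch: register a StorageClass backed by a vSphere datastore.
    # Provisioner and parameter names are assumptions (in-tree vSphere plugin).
    from kubernetes import client, config

    config.load_kube_config()

    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="vsphere-fast"),
        provisioner="kubernetes.io/vsphere-volume",
        parameters={"diskformat": "thin", "datastore": "vsanDatastore"},
    )
    client.StorageV1Api().create_storage_class(body=sc)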

OpenShift also handles networking through software-defined networking (SDN) rather than traditional physical routers, load balancers, and firewalls. A set of network operators in the cluster handles routing traffic, load balancing, and network policies. Each pod in a cluster is assigned an internal IP address so that pods can communicate with one another, whether or not they run on the same node.

Services

Services play a crucial role in Kubernetes and OpenShift, and understanding how they work is essential for efficient deployment and management of applications.

In Kubernetes, services are responsible for defining a collection of pods and establishing the means to access them. They offer a network abstraction layer and implement load balancing mechanisms to evenly distribute incoming traffic among the pods.

The Kubernetes service provides a network abstraction layer, which means it allows you to access your pods without knowing their specific IP addresses or ports. This makes it easier to manage and scale your applications.

Kubernetes services are managed through the cluster's core control plane components: the kube-apiserver, etcd, kube-controller-manager, and kube-scheduler.

Here's a brief overview of the components involved in managing Kubernetes services:

  • kube-apiserver: exposes the Kubernetes API through which services are created and updated
  • etcd: stores cluster state, including service and endpoint definitions
  • kube-controller-manager: runs the controllers that keep each service pointed at healthy pods
  • kube-scheduler: places the pods that back a service onto suitable nodes

In OpenShift, services are also an essential component, and they work similarly to Kubernetes services. They provide a network abstraction layer and implement load balancing mechanisms to distribute incoming traffic among pods.

OpenShift services are managed by various components, including the OpenShift API server, OpenShift controller manager, OpenShift OAuth API server, and OpenShift OAuth Server. These components work together to ensure that your services are running smoothly and efficiently.

Each of these components plays a vital role in managing OpenShift services, and understanding how they work can help you troubleshoot and optimize your applications.
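Here's a minimal sketch, again with the Python kubernetes client, of a service that load-balances port 80 traffic across every pod carrying the label app=web; the name, labels, namespace, and port numbers are illustrative.

    # Sketch: a Service fronting all pods labeled app=web.
    from kubernetes import client, config

    config.load_kube_config()

    svc = client.V1Service(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1ServiceSpec(
            selector={"app": "web"},                          # the pods to load-balance across
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )
    client.CoreV1Api().create_namespaced_service(namespace="demo", body=svc)

Clients inside the cluster can then reach the pods through the service's stable cluster IP and DNS name instead of tracking individual pod addresses.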

Routes (Ingress)

OpenShift is responsible for managing network configurations, which includes overlay networks for container communication, services, and routing.

One key aspect of network configuration is the management of routes, also known as ingress routes. These routes enable external traffic to reach services within the OpenShift cluster.

Routes serve the purpose of exposing applications to the internet and managing the routing of HTTP and HTTPS traffic. This is crucial for accessing applications from outside the cluster.

To better understand the role of routes, consider this: without routes, external traffic wouldn't be able to reach services within the OpenShift cluster, making them inaccessible to users.

Here are some key characteristics of routes in OpenShift:

  • Enable external traffic to reach services within the OpenShift cluster
  • Expose applications to the internet
  • Manage the routing of HTTP and HTTPS traffic
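Because routes live in the OpenShift-specific route.openshift.io/v1 API group rather than in core Kubernetes, one way to create them from Python is the generic custom-objects API. The hostname, service name, and namespace below are illustrative assumptions.

    # Sketch: expose the "web" service through an OpenShift Route.
    from kubernetes import client, config

    config.load_kube_config()

    route = {
        "apiVersion": "route.openshift.io/v1",
        "kind": "Route",
        "metadata": {"name": "web"},
        "spec": {
            "host": "web.apps.example.com",              # external hostname (illustrative)
            "to": {"kind": "Service", "name": "web"},    # the service behind the route
            "port": {"targetPort": 8080},
            "tls": {"termination": "edge"},              # terminate HTTPS at the router
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="route.openshift.io", version="v1",
        namespace="demo", plural="routes", body=route,
    )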

Storage

Storage is a crucial aspect of any application, and OpenShift offers a range of tools to handle it effectively, including network-attached storage, storage classes, and Persistent Volume Claims (PVCs).

You can store data in OpenShift environments using vSphere datastores, which can be backed by VMware vSAN, VMFS, or NFS. vSAN provides excellent performance and reliability, while VMFS allows for shared access to a pool of storage, increasing resource utilization.

A combination of Storage Policy Based Management (SPBM) and vSphere datastores provides a uniform interface for storing persistent data. This abstraction hides intricate storage details, making it easier to manage storage.

Here are some storage options you can use in OpenShift:

  • vSAN: Provides excellent performance and reliability.
  • VMFS: Allows for shared access to a pool of storage, increasing resource utilization.
  • NFS: A distributed file protocol to access storage over the network.

StorageClasses in Kubernetes allow for the creation of PersistentVolumes on-demand, without having to create storage and mount it into OpenShift nodes upfront. This makes it easier to manage storage and ensure that applications have the storage they need.
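A rough sketch of claiming such on-demand storage from the Python kubernetes client, assuming an illustrative vsphere-fast StorageClass and demo namespace:

    # Sketch: request 10 GiB of dynamically provisioned storage via a PVC.
    from kubernetes import client, config

    config.load_kube_config()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="vsphere-fast",            # illustrative class name
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="demo", body=pvc,
    )

A pod can then mount the claim by name, and the matching PersistentVolume is provisioned on the backing datastore when the claim is bound.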

Network (SDN)

Software-defined networking (SDN) is a game-changer for OpenShift customers. NSX-T Data Center has helped simplify networking and network-based security for several years with the NSX Container Plug-in (NCP).

NCP runs on each OpenShift node and connects the networking interface of a container to the NSX overlay network. It monitors container life cycle events and manages networking resources such as load balancers, logical ports, switches, routers, and security groups for the containers by calling the NSX API.

Here are some key features of NCP:

  • Automatically creates an NSX-T logical topology for an OpenShift cluster, and creates a separate logical network for each OpenShift namespace.
  • Connects OpenShift pods to the logical network, and allocates IP and MAC addresses.
  • Supports network address translation (NAT) and allocates a separate SNAT IP for each OpenShift namespace.
  • Implements OpenShift network policies with NSX-T distributed firewall.
  • Implements OpenShift Router with NSX-T layer 7 load balancer.
  • Creates tags on the NSX-T logical switch port for the namespace, pod name, and labels of a pod, and allows the administrator to define NSX-T security groups and policies based on the tags.

By using NCP, OpenShift customers can simplify their networking and network-based security, and focus on developing and deploying their applications.
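Those network policies are expressed with the standard Kubernetes NetworkPolicy API; under NCP, the NSX-T distributed firewall is what enforces them. Here's a minimal sketch that only admits traffic to app=web pods from app=frontend pods in the same namespace (labels and namespace are illustrative).

    # Sketch: allow ingress to app=web pods only from app=frontend pods.
    from kubernetes import client, config

    config.load_kube_config()

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-frontend"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                # "_from" is the Python client's name for the "from" field.
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"}))],
            )],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(namespace="demo", body=policy)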

Frequently Asked Questions

Is OpenShift a PaaS or IaaS?

OpenShift is a platform-as-a-service (PaaS) solution, not infrastructure-as-a-service (IaaS). It streamlines application development, deployment, and management through container orchestration.
