OpenShift networking is built on top of the Kubernetes networking model, which provides a robust and flexible way to manage network communication between pods and services. This foundation allows for seamless integration with various network plugins.
The OpenShift networking model consists of three main components: pods, services, and network policies. Each of these components plays a crucial role in managing network traffic and ensuring secure communication between pods and services.
Pods are the basic execution units in OpenShift; a pod is to its containers roughly what a physical or virtual machine is to a traditional application, wrapping one or more containers that are deployed and managed together. Each pod is assigned a unique IP address, which allows it to communicate with other pods and services.
Services provide a stable way to reach an application from within the cluster and, combined with routes or the appropriate service type, from outside it as well. A service is typically represented by a DNS name or IP address and can be configured to use various network protocols.
Network policies control how network traffic flows between pods and services. They can be used to restrict access to sensitive data or to prevent unauthorized communication between pods and services.
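To make those three building blocks concrete, the short Python sketch below uses the official kubernetes client to list the pods and services in a namespace together with their IP addresses. The demo namespace and a reachable kubeconfig are assumptions for illustration; this is a minimal sketch, not OpenShift-specific tooling.

```python
# Minimal sketch: list pods and services in one namespace with their IPs.
# Assumes a reachable kubeconfig and an illustrative namespace called "demo".
from kubernetes import client, config

config.load_kube_config()   # use config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

namespace = "demo"

print("Pods:")
for pod in v1.list_namespaced_pod(namespace).items:
    # Every pod receives its own IP address from the cluster's pod network.
    print(f"  {pod.metadata.name}: {pod.status.pod_ip}")

print("Services:")
for svc in v1.list_namespaced_service(namespace).items:
    # A service fronts a set of pods with a stable cluster IP and DNS name.
    print(f"  {svc.metadata.name}: {svc.spec.cluster_ip}")
```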
Kubernetes Networking Basics
Kubernetes assigns each Pod its own IP address, which pods use to communicate with one another at a very basic level.
Pods can reach each other directly by IP address, but it is recommended to use Services instead; a Service provides a stable endpoint that containers and pods can connect to using DNS or environment variables.
The pod network defaults to the 10.128.0.0/14 IP address block, and each node in the cluster is assigned a /23 CIDR range from that block.
Pods can also communicate with each other through a Service, which has an IP address and usually a DNS name. This makes the solution less brittle if the Pods die or need to be restarted.
Here are the main ways containers and Pods can communicate, illustrated in the sketch after this list:
- Containers in the same Pod can connect to each other over localhost, as long as the target container listens on a known port.
- A container in one Pod can connect to another Pod directly using that Pod's IP address.
- A container can connect to another Pod through a Service.
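The hedged sketch below illustrates those three paths from inside a container using plain Python sockets: localhost for a sibling container in the same pod, a pod IP for direct pod-to-pod traffic, and a service DNS name for the recommended, stable path. The addresses, ports, and the backend service name are placeholders, not real endpoints.

```python
# Illustrative only: three ways a container can reach another workload.
# The IPs, ports, and the "backend" service name are placeholders.
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 1. Sibling container in the same pod: shared network namespace, so use localhost.
print("same pod (localhost):", can_connect("127.0.0.1", 8080))

# 2. Another pod addressed directly by its pod-network IP (brittle: pod IPs change).
print("pod IP:", can_connect("10.128.2.15", 8080))

# 3. The recommended path: a service, resolved through the cluster DNS.
print("service DNS:", can_connect("backend.demo.svc.cluster.local", 8080))
```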
Key Initial Considerations
OpenShift Container Platform is Red Hat's offering for an on-premises private platform as a service (PaaS); it is based on the Origin open-source project (now OKD) and is a Kubernetes distribution.
Kubernetes is the leading container orchestration platform, and OpenShift builds on containers with Kubernetes as its orchestration layer.
OpenShift Container Platform's foundation is based on Kubernetes, sharing some of the same networking technology along with enhancements.
The OpenShift Container Platform is built on top of Kubernetes together with an SDN layer that abstracts the underlying infrastructure and creates a cluster-wide network.
This layer is essential for ensuring connectivity and communication between containers, pods, and services.
OpenShift Networking provides a robust framework that enables efficient and secure networking within the OpenShift cluster.
OpenShift Networking leverages network namespaces to achieve isolation between different projects, or namespaces, on the platform.
Each project has its own virtual network, ensuring that containers and pods within a project can communicate securely while remaining isolated from other projects.
This isolation is crucial for maintaining security and preventing unauthorized access between projects.
Network namespaces also allow for efficient resource allocation and management within each project.
OpenShift Networking provides service discovery and load-balancing mechanisms to facilitate communication between various application components.
Services act as stable endpoints, allowing containers and pods to connect to them using DNS or environment variables.
The built-in OpenShift load balancer ensures that traffic is distributed evenly across multiple instances of a service, improving scalability and reliability.
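To make that stable endpoint concrete, the sketch below defines a ClusterIP service with the Python kubernetes client; any pod carrying the assumed app: web label becomes a backend the service spreads traffic across. The names, labels, ports, and demo namespace are illustrative assumptions.

```python
# Sketch: define a ClusterIP service that load-balances across pods
# labelled app=web. Names, labels, ports, and namespace are illustrative.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        type="ClusterIP",                       # stable, cluster-internal virtual IP
        selector={"app": "web"},                # pods with this label become backends
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

created = v1.create_namespaced_service(namespace="demo", body=service)
# Other workloads can now reach the pods at web.demo.svc.cluster.local:80.
print("service IP:", created.spec.cluster_ip)
```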
By using OpenShift Networking, administrators can define ingress and egress network policies to control network traffic flow within the platform.
Ingress policies specify rules for incoming traffic, allowing or denying access to specific services or pods.
Egress policies, on the other hand, regulate outgoing traffic from pods, enabling administrators to restrict access to external systems or services.
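As a hedged illustration of such an ingress rule, the sketch below uses the Kubernetes NetworkPolicy API, which OpenShift supports, to allow traffic to pods labelled app: web only from pods labelled role: frontend on port 8080; every name, label, port, and the demo namespace here is an assumption for the example.

```python
# Sketch: ingress NetworkPolicy allowing only role=frontend pods to reach
# app=web pods on port 8080. All names and labels are illustrative.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

policy = client.V1NetworkPolicy(
    api_version="networking.k8s.io/v1",
    kind="NetworkPolicy",
    metadata=client.V1ObjectMeta(name="allow-frontend-to-web"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                # "_from" is the Python client's name for the "from" field.
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"role": "frontend"})
                )],
                ports=[client.V1NetworkPolicyPort(port=8080)],
            )
        ],
    ),
)

net.create_namespaced_network_policy(namespace="demo", body=policy)
```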
OpenShift Networking supports various network plugins and providers, allowing users to choose the networking solution that best fits their requirements.
Some popular options include Open vSwitch (OVS), Flannel, Calico, and Multus, which provide additional capabilities such as network isolation, advanced routing, and security features.
OpenShift provides robust monitoring and troubleshooting tools to help administrators track network performance and resolve issues.
The platform integrates with monitoring systems like Prometheus, allowing users to collect and analyze network metrics.
Additionally, OpenShift provides logging and debugging features to aid in identifying and resolving network-related problems.
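As one example of collecting such metrics programmatically, the sketch below queries a Prometheus HTTP API for per-pod receive throughput using the standard cAdvisor metric container_network_receive_bytes_total; the Prometheus URL, bearer token, and demo namespace are placeholders you would replace with your cluster's monitoring endpoint and credentials.

```python
# Sketch: query Prometheus for per-pod receive throughput over 5 minutes.
# PROM_URL and TOKEN are placeholders; adapt them to your monitoring stack.
import requests

PROM_URL = "https://prometheus.example.com"   # assumed endpoint
TOKEN = "REPLACE_ME"                          # assumed bearer token

query = 'sum by (pod) (rate(container_network_receive_bytes_total{namespace="demo"}[5m]))'

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": query},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    pod = result["metric"].get("pod", "<unknown>")
    bytes_per_sec = float(result["value"][1])
    print(f"{pod}: {bytes_per_sec:.0f} B/s received")
```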
Kubernetes Container Concept
A pod is the smallest compute unit that can be defined, deployed, and managed in OpenShift; to its containers, a pod is roughly the equivalent of a physical or virtual machine instance.
A pod consists of one or more containers deployed together on one host, and the containers within a pod share local storage and networking.
Each pod has its own IP address, and pods can be removed after exiting or retained to allow access to container logs, depending on policy and exit code.
Pods are largely immutable in OpenShift, meaning they cannot be modified while running, and changes are implemented by terminating existing pods and recreating them with modified configurations.
Pods are expendable and do not maintain their state when recreated, so they should not be managed directly by users but by higher-level controllers.
A pod acts as a boundary layer for any cluster parameters that directly affect the containers, and deployments are run against pods rather than individual containers.
Each pod gets an IP address assigned when an application is deployed on the cluster, and different pods can host different applications, such as a web front end in one pod and a database in another.
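To ground the point about shared networking and storage, here is a hedged sketch of a two-container pod built with the Python kubernetes client: a web container and a helper sidecar that reaches it over localhost and shares an emptyDir volume. The image names, ports, labels, and demo namespace are illustrative assumptions.

```python
# Sketch: one pod, two containers sharing the network namespace and a volume.
# Images, ports, names, and namespace are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

shared = client.V1VolumeMount(name="shared-data", mount_path="/data")

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="web-with-sidecar", labels={"app": "web"}),
    spec=client.V1PodSpec(
        volumes=[client.V1Volume(name="shared-data",
                                 empty_dir=client.V1EmptyDirVolumeSource())],
        containers=[
            client.V1Container(
                name="web",
                image="registry.example.com/web:latest",
                ports=[client.V1ContainerPort(container_port=8080)],
                volume_mounts=[shared],
            ),
            client.V1Container(
                name="sidecar",
                image="registry.example.com/log-shipper:latest",
                # Same network namespace: the sidecar can reach the web
                # container at localhost:8080 and read files under /data.
                volume_mounts=[shared],
            ),
        ],
    ),
)

v1.create_namespaced_pod(namespace="demo", body=pod)
```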
Plugins
OpenShift supports several networking plugins, each with distinct features.
OVN-Kubernetes is a popular choice for its scalability and support for network policies, making it ideal for complex deployments.
Calico is known for its simplicity and efficiency, providing high-performance network connectivity and excelling in environments that require fine-grained network policies.
Flannel is a simpler alternative, suitable for users who prioritize ease of setup and basic networking functionality.
The choice of networking plugin can significantly impact the performance and security of your applications, so it's essential to select the right one for your organization's needs.
OVN-Kubernetes integrates seamlessly with Kubernetes, offering enhanced network isolation and simplified network management, making it a versatile choice for complex deployments.
Calico's support for IPv6 and integration with Kubernetes NetworkPolicy API makes it a popular choice in microservices architectures.
Flannel uses a flat network model, which is straightforward to configure and manage, making it a good choice for smaller clusters or those in the early stages of development.
Regular monitoring and management are crucial to maintaining optimal performance and security, and utilizing OpenShift's built-in tools and dashboards can greatly aid in this process.
Network & Communication
Pods can communicate with each other using their IP addresses, but it's recommended to use Services instead, as pods can be restarted frequently and addressing them directly by name or IP is brittle.
Each pod in a Kubernetes cluster is assigned an IP address from the pod network, which defaults to the 10.128.0.0/14 IP address block. This allows pods to communicate directly with each other by addressing their IP addresses.
Services act as internal load balancers: they identify a set of replicated pods and proxy connections to them, providing a consistent address so that everything that depends on those pods can refer to them at a constant location.
Services are assigned an IP address and port pair that, when accessed, proxies to an appropriate backing pod. Using a label selector, a service finds all of the pods that provide a particular network service on a specific port.
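To see which backing pods a service currently proxies to, you can read its Endpoints object, which the label selector keeps up to date. A minimal sketch with the Python kubernetes client, assuming a service named web in a demo namespace:

```python
# Sketch: show the pod IPs and ports currently backing the "web" service.
# The service name and namespace are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

endpoints = v1.read_namespaced_endpoints(name="web", namespace="demo")

for subset in endpoints.subsets or []:
    ports = [p.port for p in (subset.ports or [])]
    for address in subset.addresses or []:
        # Each address is a pod selected by the service's label selector.
        backer = address.target_ref.name if address.target_ref else "<unknown>"
        print(f"backend pod {backer} at {address.ip}, ports {ports}")
```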
Pods can communicate with each other in three ways:
- Containers in the same pod can connect using localhost.
- A container in a pod can connect to another pod using its IP address.
- A container can connect to another pod through a service.
The primary CNI plugin, OpenShift CNI SDN Plugin, establishes the cluster-wide network and configures the overlay network using OVS. This allows pods to communicate with each other and access the internal network.
OpenShift provides a built-in DNS for services, allowing them to be reached by their DNS name as well as their IP address and port. This makes it easier for pods to communicate with each other and access services.
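From inside a pod, both lookup paths can be exercised with the standard library alone. A small sketch, assuming a service named backend in the demo namespace:

```python
# Sketch: resolve a service by its cluster DNS name and by the environment
# variables injected into pods. Service name and namespace are assumptions.
import os
import socket

# Cluster DNS: <service>.<namespace>.svc.cluster.local resolves to the service IP.
service_ip = socket.gethostbyname("backend.demo.svc.cluster.local")
print("resolved via DNS:", service_ip)

# Environment variables: Kubernetes injects <SERVICE>_SERVICE_HOST/_PORT
# into pods created after the service exists.
host = os.environ.get("BACKEND_SERVICE_HOST")
port = os.environ.get("BACKEND_SERVICE_PORT")
print("from environment:", host, port)
```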
Network Configuration
OpenShift uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster.
The default configuration for the pod network's topology is a single flat network, where every pod in every project can communicate without restrictions.
OpenShift SDN uses a plugin architecture that provides different network topologies, allowing you to choose a plugin that matches your desired topology.
The OpenShift SDN establishes and maintains this pod network, configuring an overlay network using Open vSwitch (OVS).
When a new pod is created on a host, the local OpenShift SDN allocates and assigns it an IP address from the cluster network subnet assigned to that node.
Each node in the cluster is assigned a /23 CIDR IP address range from the pod network block, which means each application node can accommodate a maximum of 512 pods.
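The arithmetic behind those numbers can be checked with Python's standard ipaddress module: splitting a /14 pod network into /23 per-node subnets yields 512 addresses per subnet, and the block holds 512 such subnets.

```python
# Worked example: split the default pod network into per-node /23 subnets.
import ipaddress

pod_network = ipaddress.ip_network("10.128.0.0/14")
node_subnets = list(pod_network.subnets(new_prefix=23))

print("first node subnet:", node_subnets[0])                         # 10.128.0.0/23
print("addresses per node subnet:", node_subnets[0].num_addresses)   # 512
print("number of /23 node subnets:", len(node_subnets))              # 512
```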
The primary CNI plugin, the essence of SDN for OpenShift, establishes the cluster-wide network and configures the overlay network using OVS.
Services are assigned an IP address and port pair that, when accessed, proxies to an appropriate backing pod, and several different service types exist, including ClusterIP.
Unsecured routes are the default and the easiest to configure, while secured routes use TLS to keep connections private.
A secured route can be created with the oc create route command, optionally supplying certificates and keys in PEM-format files, which must be generated and signed separately.
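For illustration, the sketch below creates an edge-terminated (TLS) route through the API rather than the oc CLI, using the generic custom-objects client against OpenShift's route.openshift.io/v1 API; the hostname, service name, namespace, and the PEM files read from disk are assumptions for the example.

```python
# Sketch: create an edge-terminated Route for the "web" service.
# Hostname, namespace, service name, and certificate paths are assumptions.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

route = {
    "apiVersion": "route.openshift.io/v1",
    "kind": "Route",
    "metadata": {"name": "web-secure"},
    "spec": {
        "host": "web.apps.example.com",
        "to": {"kind": "Service", "name": "web"},
        "port": {"targetPort": 8080},
        "tls": {
            "termination": "edge",
            # Separately generated and signed PEM material, as described above.
            "certificate": open("tls.crt").read(),
            "key": open("tls.key").read(),
        },
    },
}

custom.create_namespaced_custom_object(
    group="route.openshift.io",
    version="v1",
    namespace="demo",
    plural="routes",
    body=route,
)
```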
The pod network defaults to the 10.128.0.0/14 IP address block, and the OpenShift SDN manages how address ranges from this block are allocated to each application node.
The OpenShift SDN plugin implements network policies with Open vSwitch flow rules, which dictate which packets are allowed or denied.
Network policies can be used to secure application traffic, while role-based access control (RBAC) restricts user permissions.
A service groups a set of pods behind a single, fixed DNS name or IP address, and it is recommended that workloads connect through services rather than addressing pods directly by name or IP.
Network Architecture
OpenShift's network architecture is built upon the foundation of Kubernetes, but with its own enhancements bringing better networking and security capabilities. The architecture consists of several key components, including the OpenShift SDN, network policies, and service mesh capabilities.
The OpenShift SDN abstracts the underlying network infrastructure, allowing developers to focus on application logic rather than network configurations. This abstraction enables greater flexibility and simplifies the deployment process.
The pod network defaults to the 10.128.0.0/14 IP address block, with each node assigned a /23 CIDR range from that block, which allows a maximum of 512 pods per application node.
Azure Red Hat OpenShift 4 clusters follow the same model, with key network settings such as the pod and service CIDR ranges defined when the cluster is created.
Key Challenges: Data Centers
Traditional data center networks are struggling to keep up with today's applications, such as microservices and containers. They are too tightly coupled with infrastructure components, making it hard to adapt to the agile nature of containerized applications.
One of the main issues is a lack of flexibility in supporting today's applications, which are far more agile than traditional monolithic applications; this stems from fixed network points and tight Layer 4 coupling.
Containers are short-lived and are constantly spun up and torn down, which leads to frequent changes in the assets that support the application, including IP addresses, firewall rules, policies, and the overlay networks that glue the connectivity together.
These frequent changes can be overwhelming for traditional networks, which are relatively static and only change every few months.
The Architecture of OpenShift Networking
OpenShift's networking model is built upon the foundation of Kubernetes, but it takes things a step further with enhancements that bring better networking and security capabilities than the default Kubernetes model.
When a pod is scheduled onto a node, the OpenShift SDN allocates and assigns it an IP address from the cluster network subnet assigned to that node and connects the pod's veth interface to a port on the br0 OVS bridge.
The primary CNI plugin, the essence of SDN for OpenShift, establishes the cluster-wide network and configures the overlay network using OVS. The SDN then programs new OpenFlow entries on br0 so that traffic addressed to the newly allocated IP address is delivered to the correct OVS port.
OpenShift's network policies allow administrators to define how pods communicate with each other and with the outside world. By leveraging network policies, teams can enforce security boundaries, ensuring that only authorized traffic is allowed to flow within the cluster.
This is particularly crucial in multi-tenant environments where different teams might share the same OpenShift cluster but require isolated communication channels. OpenShift's network policies can be used to control network traffic flow within the platform.
Some key benefits of OpenShift's network policies include enforcing security boundaries, controlling network traffic flow, and isolating communication channels between tenants.
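A common multi-tenant pattern is a namespace-wide policy that only admits traffic originating in the same namespace; a hedged sketch follows, with the team-a namespace as an assumption. An empty pod selector makes the policy apply to every pod in the namespace.

```python
# Sketch: restrict every pod in "team-a" to ingress from its own namespace.
# The namespace name is an illustrative assumption.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

policy = client.V1NetworkPolicy(
    api_version="networking.k8s.io/v1",
    kind="NetworkPolicy",
    metadata=client.V1ObjectMeta(name="allow-same-namespace-only"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),          # empty selector = all pods
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                # An empty pod selector as the peer means "any pod in this
                # namespace"; traffic from other namespaces is dropped.
                _from=[client.V1NetworkPolicyPeer(pod_selector=client.V1LabelSelector())]
            )
        ],
    ),
)

net.create_namespaced_network_policy(namespace="team-a", body=policy)
```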
Here are some key components of OpenShift's networking model:
- OpenShift SDN (Software Defined Networking)
- Network policies
- Service mesh capabilities
- OVS (Open vSwitch)
- CNI plugin
- OpenFlow entries
These components work together to provide a robust and secure networking framework for OpenShift.
Frequently Asked Questions
What is OpenShift Service network?
OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network for communication between pods. This unified network is essentially the OpenShift Service Network.
Sources
- https://network-insight.net/2022/07/18/openshift-sdn/
- https://learn.microsoft.com/en-us/azure/openshift/concepts-networking
- https://network-insight.net/2022/06/09/openshift-networking/
- https://docs.openshift.com/container-platform/4.9/networking/understanding-networking.html
- https://docs.openshift.com/container-platform/3.11/architecture/networking/sdn.html