OpenShift Bare Metal Deployment and Management


Deploying and managing OpenShift on bare metal is a straightforward process, thanks to the streamlined architecture of the OpenShift Bare Metal Operator.

The OpenShift Bare Metal Operator automates the deployment and management of OpenShift on bare metal infrastructure, eliminating the need for manual configuration and setup.

This automation significantly reduces the time and effort required to get started with OpenShift on bare metal, allowing you to focus on more critical tasks.

With the OpenShift Bare Metal Operator, you can easily manage your bare metal infrastructure from a single interface, making it easier to scale and maintain your OpenShift environment.

Prerequisites

To install OpenShift Container Platform on bare metal, you need to review the installation and update processes.

You should also read the documentation on selecting a cluster installation method and preparing it for users.

If you use a firewall, you need to configure it to allow the sites that your cluster requires access to. Be sure to also review this site list if you are configuring a proxy.


You need to create a registry on your mirror host and obtain the imageContentSources data for your version of OpenShift Container Platform.
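For reference, the imageContentSources data that the mirroring process produces, and that you later add to your install-config.yaml, looks roughly like the following sketch; the mirror registry host name, port, and repository path are placeholders for your environment:

```yaml
imageContentSources:
- mirrors:
  - <mirror_host_name>:<port>/ocp4/openshift4   # placeholder: your mirror registry and repository
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:<port>/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
```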

Because the installation media is on the mirror host, you can use that computer to complete all installation steps.

You must provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes.
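As a minimal sketch, a claim that provides ReadWriteMany access for the image registry might look like the following; the storage class name and size are assumptions, not values from this guide:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage
  namespace: openshift-image-registry
spec:
  accessModes:
  - ReadWriteMany                          # required so replicated registry pods can share the volume
  resources:
    requests:
      storage: 100Gi                       # placeholder size
  storageClassName: <rwx_storage_class>    # assumption: an NFS or similar RWX-capable class exists
```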

Container Internet Access

You'll need internet access to install an OpenShift Container Platform cluster on bare metal infrastructure. This is because you'll need to access the OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management.

The cluster also requires internet access to access Quay.io and obtain the packages needed for installation. Additionally, you'll need internet access to obtain the packages required for cluster updates.

If your cluster can't have direct internet access, you can perform a restricted network installation on some types of infrastructure. This involves downloading the required content and using it to populate a mirror registry with the installation packages.

Here are some types of infrastructure that support restricted network installations:

  • Installing a user-provisioned bare metal cluster on a restricted network

Installation Process


To install an OpenShift cluster on bare metal, you'll need to configure the OpenShift installer, ACI infra and CNI, and prepare custom network configuration for OpenShift nodes. At least two network interfaces are necessary for bare metal nodes, one for the node network and the other for the pod network.

The installation process involves several steps, including creating the OpenShift install manifests, Fedora CoreOS ignition files, and iPXE boot files. This is typically done using a script like `labcli --deploy-c`, which does a lot of work for you.

Here are the key steps in the installation process:

  • Configuring the OpenShift Installer
  • Configuring ACI Infra and CNI
  • Preparing Custom Network Configuration for OpenShift Nodes

These steps are crucial in setting up a successful OpenShift cluster on bare metal.
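If you are not using a wrapper script, the manifest and Ignition steps above correspond roughly to the standard installer commands sketched below; the installation directory is a placeholder, and the iPXE boot files are prepared separately for your PXE server:

```bash
# Generate the install manifests and Ignition configs from an existing install-config.yaml.
openshift-install create manifests --dir <installation_directory>
openshift-install create ignition-configs --dir <installation_directory>

# Copy the resulting *.ign files to the web server that your iPXE boot files reference.
```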

Downloading the Binary

You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface.

The OpenShift CLI is available for Linux, Windows, or macOS.

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.11.

You'll need to download and install the new version of oc to ensure compatibility.

Here are the supported operating systems for installing the OpenShift CLI:

  • Linux
  • Windows
  • macOS
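As a rough sketch, installing the Linux binary usually amounts to unpacking the downloaded archive onto your PATH; the archive name varies by version and platform:

```bash
# Unpack the OpenShift CLI archive downloaded from the OpenShift Cluster Manager
# and place the binaries on your PATH (archive name is a placeholder).
tar xvf openshift-client-linux.tar.gz
sudo mv oc kubectl /usr/local/bin/
oc version --client    # confirm that the new version is the one on your PATH
```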

Disk Partitioning


Disk partitioning is a crucial step in the installation process. It allows you to divide a disk into separate sections, each holding its own operating system, programs, or data.

You can choose between partitioning schemes such as MBR, with its primary and extended partitions, and GPT, which most modern systems use.

Initial Operator

After the control plane initializes, you must immediately configure some Operators so that they all become available. This is a crucial step in the installation process.

You can check the status of your cluster components by running the command `$ watch -n5 oc get clusteroperators`. This will show you the current status of each Operator, including whether it's available, progressing, or degraded.

Here are the cluster Operators that the command reports; on bare metal, some of them, such as the image-registry Operator, do not become available until you configure them:

  • authentication
  • baremetal
  • cloud-credential
  • cluster-autoscaler
  • config-operator
  • console
  • csi-snapshot-controller
  • dns
  • etcd
  • image-registry
  • ingress
  • insights
  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler
  • kube-storage-version-migrator
  • machine-api
  • machine-approver
  • machine-config
  • marketplace
  • monitoring
  • network
  • node-tuning
  • openshift-apiserver
  • openshift-controller-manager
  • openshift-samples
  • operator-lifecycle-manager
  • operator-lifecycle-manager-catalog
  • operator-lifecycle-manager-packageserver
  • service-ca
  • storage
Once you've identified the Operators that are not available, you can configure them by following the instructions provided by the OpenShift documentation.
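For example, on bare metal the Image Registry Operator typically needs storage configured before it reports as available. The sketch below uses ephemeral emptyDir storage, which is an assumption suitable only for lab clusters, not production:

```bash
# Take the registry out of the Removed state and give it ephemeral storage.
# emptyDir is not persistent; production clusters should use PVC-backed storage instead.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge --patch '{"spec":{"managementState":"Managed"}}'
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
```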

Deploy with Virtual Media

To deploy with Virtual Media on the baremetal network, you need to enable the provisioning network and make a few tweaks to your configuration.


First, edit the provisioning custom resource (CR) to enable deploying with Virtual Media on the baremetal network. This involves adding `virtualMediaViaExternalNetwork: true` to the provisioning CR.

Next, if the image URL exists, edit the machineset to use the API VIP address. This step applies only to clusters installed in versions 4.9 or earlier. To do this, edit the checksum URL and the URL to use the API VIP address.

Here's a summary of the steps:

  1. Add `virtualMediaViaExternalNetwork: true` to the provisioning CR.
  2. Edit the checksum URL and the URL to use the API VIP address.
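A hypothetical excerpt of the provisioning CR after step 1 might look like the following; every value other than the virtualMediaViaExternalNetwork line is a placeholder for whatever your cluster already uses:

```yaml
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Managed             # example value; keep your existing setting
  virtualMediaViaExternalNetwork: true     # enables Virtual Media deployment over the baremetal network
```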

Supporting 4.14

To support OpenShift 4.14 on a Bare Metal Server, you'll need at least two network interfaces. One interface is required for the node network, and the other for the pod network.

The design separates OpenShift node traffic from the pod traffic. This separation can be achieved through two options.

To separate the node and pod networks, you can use separate physical interfaces for each network. This means the first interface is used for the node network, and the second one is used for the pod network, which also carries Cisco ACI control plane traffic.


Alternatively, you can configure the node network and pod network as VLAN subinterfaces of either bond0 or a physical NIC. This approach allows you to configure additional VLANs for management purposes or use the node network for management.

To implement either of these options, you'll need to consider the server provisioning method, such as PXE or manual ISO boot, as the design might depend on it.

Here are the two options for network configuration:

  • Separate physical interface for node and infra networks
  • Single Sub interface for both node and infra networks
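As an illustration of the second option, the node and pod networks could be created as VLAN subinterfaces of bond0 with NetworkManager; the VLAN IDs, connection names, and addresses below are assumptions rather than values from this guide:

```bash
# Node network as VLAN 100 on bond0, with a static example address.
nmcli con add type vlan con-name bond0.100 ifname bond0.100 dev bond0 id 100 \
  ipv4.method manual ipv4.addresses 192.168.100.10/24 ipv4.gateway 192.168.100.1

# Pod/infra network as VLAN 200 on bond0; in this design it also carries
# the Cisco ACI control plane traffic.
nmcli con add type vlan con-name bond0.200 ifname bond0.200 dev bond0 id 200
```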

Networking

Networking is a crucial aspect of OpenShift Bare Metal, and it's essential to get it right from the start. DHCP can be used to set cluster node hostnames, which can bypass manual DNS record name configuration errors in environments with a DNS split-horizon implementation.

Setting up DNS is also critical, and you'll need to configure A and PTR records for name resolution. The api record provides name resolution for the Kubernetes API and points to the IP address of the API load balancer, while the api-int record provides name resolution for the Kubernetes API that is used for internal cluster communications.


Here are some key DNS records to consider:

  • api.<cluster_name>.<base_domain>: the Kubernetes API, pointing to the API load balancer
  • api-int.<cluster_name>.<base_domain>: the Kubernetes API for internal cluster communications
  • *.apps.<cluster_name>.<base_domain>: a wildcard record for application ingress, pointing to the Ingress load balancer
  • bootstrap.<cluster_name>.<base_domain>: the bootstrap machine
  • A and PTR records for each control plane and compute node

Validating DNS resolution is also essential before installing OpenShift Container Platform on user-provisioned infrastructure. You can run DNS lookups against the record names of the Kubernetes API, wildcard routes, and cluster nodes to ensure that the IP addresses correspond to the correct components.
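A minimal validation pass, assuming a reachable nameserver and the usual record names, might look like this:

```bash
# DNS lookups against the API record, a wildcard application route, and a reverse (PTR) record.
# The nameserver address, cluster name, base domain, and node IP are placeholders.
dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain>
dig +noall +answer @<nameserver_ip> test.apps.<cluster_name>.<base_domain>
dig +noall +answer @<nameserver_ip> -x <control_plane_node_ip>
```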

Setting Hostnames via DHCP

Setting hostnames via DHCP is a reliable way to ensure your cluster nodes have accurate and consistent hostnames. This method bypasses any manual DNS record name configuration errors in environments with a DNS split-horizon implementation.

On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager, and DHCP is the default method for obtaining hostnames.

Using DHCP to set hostnames can also save time. Without it, RHCOS falls back to a reverse DNS lookup to obtain the hostname, which can take time to resolve; other system services may start before the lookup completes and detect the hostname as localhost or similar.

Setting hostnames through DHCP can be especially useful in environments where DNS record name configuration errors are common.
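For illustration only, an ISC DHCP reservation that hands a node its hostname along with its lease could look like the following; the MAC address, IP address, and names are assumptions:

```
host control-plane-0 {
  hardware ethernet 52:54:00:aa:bb:cc;                    # placeholder MAC address
  fixed-address 192.168.1.20;                             # placeholder node IP
  option host-name "control-plane-0.ocp4.example.com";    # hostname handed out with the lease
}
```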

NTP


NTP is a crucial protocol for keeping your systems in sync with the correct time. By default, OpenShift Container Platform clusters use a public NTP server, but you can configure it to use a local enterprise NTP server if needed.

If you're deploying your cluster in a disconnected network, you'll need to configure the cluster to use a specific time server. This is where things get interesting, as the chrony time service on Red Hat Enterprise Linux CoreOS (RHCOS) machines can sync with NTP servers provided by a DHCP server.

DHCP servers can provide NTP server information to RHCOS machines, which then sync their clocks with the NTP servers. This is a convenient feature that saves you from having to manually configure the time service.

Here are some key points to keep in mind when working with NTP:

  • Configuring chrony time service is the way to go if you need to use a local enterprise NTP server or if your cluster is being deployed in a disconnected network.
  • Check out the documentation for Configuring chrony time service for more information.
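A sketch of that configuration, assuming the Butane-based workflow from the chrony documentation and a placeholder enterprise NTP server, is shown below:

```yaml
# 99-worker-chrony.bu: Butane config that writes /etc/chrony.conf on worker nodes.
# Convert it with `butane 99-worker-chrony.bu -o 99-worker-chrony.yaml`, then `oc apply -f` the result.
variant: openshift
version: 4.11.0
metadata:
  name: 99-worker-chrony
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644
    overwrite: true
    contents:
      inline: |
        server ntp.example.com iburst    # placeholder enterprise NTP server
        driftfile /var/lib/chrony/drift
        makestep 1.0 3
        rtcsync
        logdir /var/log/chrony
```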

Networking Requirements for User-Provisioned Infrastructure

Networking is a critical component of OpenShift Container Platform, and understanding the requirements for your user-provisioned infrastructure is essential for a successful deployment.


To validate your DNS configuration, you must ensure that the required DNS records are in place. This includes A and PTR records for name resolution and reverse name resolution.

You can validate your DNS configuration by running DNS lookups against the record names of the Kubernetes API, wildcard routes, and cluster nodes. This will help you ensure that the IP addresses contained in the responses correspond to the correct components.

Load balancing requirements for user-provisioned infrastructure are also crucial. You must provision the API and application ingress load balancing infrastructure before installing OpenShift Container Platform. This includes configuring the load balancer to handle traffic for the Kubernetes API and application ingress.

Here are the specific load balancing requirements:

  • API load balancer: ports 6443 (Kubernetes API) and 22623 (machine config server), layer 4 (TCP), routed to the bootstrap and control plane machines
  • Application Ingress load balancer: ports 443 (HTTPS) and 80 (HTTP), layer 4 (TCP), routed to the machines that run the Ingress Controller pods

The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool.

In a three-node cluster deployment with zero compute nodes, the Ingress Controller pods run on the control plane nodes. You must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.


To ensure that the load balancer is working correctly, you can check that the haproxy process is listening on ports 6443, 22623, 443, and 80 by running `netstat -nltupe` on the HAProxy node.

You can configure session persistence for the application Ingress load balancer, but do not configure it for the API load balancer, as doing so can cause performance issues.
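To make this concrete, here is an abbreviated haproxy.cfg sketch for the API front end; backend names and addresses are placeholders, and the 22623, 443, and 80 stanzas follow the same pattern pointed at the appropriate nodes:

```
# Kubernetes API front end (port 6443), TCP mode, backed by the control plane nodes.
frontend api
  bind *:6443
  mode tcp
  default_backend api-backend

backend api-backend
  mode tcp
  balance source
  server master0 192.168.1.20:6443 check
  server master1 192.168.1.21:6443 check
  server master2 192.168.1.22:6443 check
```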

Enabling Multipathing with Kernel Arguments

Enabling multipathing with kernel arguments on RHCOS provides stronger resilience to hardware failure and higher host availability. RHCOS supports multipathing on the primary disk for exactly this purpose.

You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While post-installation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended.

In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. On IBM Z and LinuxONE, you can enable multipathing only if you configured your cluster for it during installation.


To enable multipath and start the multipathd daemon, run `mpathconf --enable && systemctl start multipathd.service`. This is the first step in enabling multipathing with kernel arguments.

Here's a step-by-step guide to enabling multipathing with kernel arguments:

  1. Enable multipath and start the multipathd daemon: `mpathconf --enable && systemctl start multipathd.service`
  2. Append the kernel arguments by invoking the coreos-installer program (see the sketch below).
  3. Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command-line arguments (in /proc/cmdline on the host): `oc debug node/ip-10-0-141-105.ec2.internal`

You should see the added kernel arguments, such as `rd.multipath=default root=/dev/disk/by-label/dm-mpath-root`. This indicates that multipathing is enabled and working correctly.
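A sketch of step 2, assuming a coreos-installer based install onto a multipathed device, could look like this; the target device, HTTP server, and Ignition file name are placeholders:

```bash
# Install RHCOS onto the multipath device and append the multipath kernel arguments.
coreos-installer install /dev/mapper/mpatha \
  --ignition-url=http://<http_server>/worker.ign \
  --append-karg rd.multipath=default \
  --append-karg root=/dev/disk/by-label/dm-mpath-root
```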

Advanced Topics

You can customize your network configuration by specifying advanced network configuration for your cluster. This is only possible before you install the cluster.

To do this, you'll need to create a stub manifest file for the advanced network configuration. This file should be named cluster-network-03-config.yml and should be placed in the manifests/ directory of your installation directory.

You can specify the advanced network configuration in the cluster-network-03-config.yml file. For example, you can specify a different VXLAN port for the OpenShift SDN network provider by adding the following code to the file:

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    openshiftSDNConfig:
      vxlanPort: 4800
```

Alternatively, you can enable IPsec for the OVN-Kubernetes network provider by adding the following code to the file:

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig: {}
```

You can optionally back up the manifests/cluster-network-03-config.yml file if you want to preserve your changes, because the installation program consumes the manifests/ directory when it creates the cluster.
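Putting the pieces together, creating the stub file before adding either snippet might look like the following, with the installation directory as a placeholder:

```bash
# Generate the manifests, then create an empty stub for the advanced network configuration.
./openshift-install create manifests --dir <installation_directory>

cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
EOF
```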

PXE Advanced Options

PXE advanced options offer a range of possibilities for configuring your OpenShift Container Platform nodes.

To set up static IP addresses or configure special settings, such as bonding, you can pass special kernel parameters when you boot the live installer. This technique allows you to customize the networking configuration for your PXE installation.
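For example, a static-IP configuration passed to the live installer is typically a single ip= argument plus a nameserver on the kernel command line; the addresses, hostname, and interface name below are placeholders:

```
ip=192.168.1.20::192.168.1.1:255.255.255.0:master-0.ocp4.example.com:enp1s0:none nameserver=192.168.1.5
```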

You can also use a machine config to copy networking files to the installed system. This method provides a way to configure networking from a machine config, which can be useful for complex networking setups.

For PXE or iPXE installations, you can refer to the "Advanced RHCOS installation reference" tables for more information on available options.


Whichever option you choose, passing special kernel parameters at boot time or using a machine config to copy networking files onto the installed system, you can set up static IP addresses or configure special settings, such as bonding, for your PXE installation.

Advanced Reference

To customize your network configuration for an OpenShift Container Platform cluster, you can use advanced network configuration. This should be done before installing the cluster.

You can specify a different VXLAN port for the OpenShift SDN network provider. This is done by modifying the cluster-network-03-config.yml file in the manifests/ directory.

The default VXLAN port is 4789, but you can change it to any available port. For example, you can specify vxlanPort: 4800 in the cluster-network-03-config.yml file.


You can also enable IPsec for the OVN-Kubernetes network provider. This is done by adding the ipsecConfig: {} section to the cluster-network-03-config.yml file.

You can back up the manifests/cluster-network-03-config.yml file if you want to preserve your changes. This is optional, but it's a good idea to keep a record of your customizations.

Here's a summary of the steps to customize your network configuration:

  1. Create the installation manifests for your cluster.
  2. Create a stub cluster-network-03-config.yml file in the manifests/ directory.
  3. Add your advanced settings, such as a different vxlanPort or ipsecConfig: {}, to that file.
  4. Optionally back up the file, then continue with the installation.

Replacing the Control Plane

Replacing the Control Plane can be a daunting task, especially when dealing with complex systems.

One key consideration is the need for a scalable and fault-tolerant architecture, which ensures that the system can handle increased traffic and maintain performance even in the event of failures.

A good example of this is the use of load balancers to distribute traffic across multiple nodes, which helps prevent any single point of failure and keeps the system available.


Replacing the control plane also requires careful consideration of network architecture, including designing a network that can handle the increased traffic and provide the necessary connectivity for the new control plane.

In some cases, it may be necessary to reconfigure existing network devices, such as routers and switches, to accommodate the new control plane. This can be a complex process, but it's essential for ensuring that the system functions correctly.

Specifying Advanced

You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment, but only before you install the cluster. Note that customizing your network configuration by directly editing the OpenShift Container Platform manifest files created by the installation program is not supported; instead, you create a separate manifest file as described below.

To specify advanced network configuration, you can create a stub manifest file for the cluster network configuration in the manifests directory. This file should be named cluster-network-03-config.yml.


The cluster-network-03-config.yml file should contain the advanced network configuration for your cluster, such as a different VXLAN port for the OpenShift SDN network provider or IPsec for the OVN-Kubernetes network provider, as shown in the examples earlier in this section.

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object named cluster. During cluster installation, the CNO configuration inherits the clusterNetwork, serviceNetwork, and defaultNetwork fields from the Network API in the config.openshift.io API group, and these fields cannot be changed afterward.

You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
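A fuller illustration of the CNO CR named cluster, with placeholder address ranges and the OVN-Kubernetes provider, might look like this:

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:               # pod address pools, inherited from the install-time network configuration
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:               # service address pool
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes         # cluster network provider; provider-specific settings go under ovnKubernetesConfig
```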

Frequently Asked Questions

What are the advantages of OpenShift baremetal?

OpenShift baremetal offers faster performance and direct hardware access, making it ideal for resource-intensive applications that require GPUs or specialized network cards.

Does Kubernetes run on bare metal?

Yes, Kubernetes can run on bare metal, offering more control over hardware and enhanced performance. Running Kubernetes on bare metal provides a more direct and efficient deployment experience.

