OpenShift Fusion with ECE: A Comprehensive Guide


OpenShift Fusion with ECE is a powerful combination that can help you streamline your development and deployment processes, letting teams work more efficiently.

With OpenShift Fusion, you can integrate your ECE (Enterprise Container Engine) setup with your existing infrastructure. This integration enables features like automated deployment and scaling of your applications.

By leveraging the strengths of both Openshift and ECE, you can create a seamless development-to-production pipeline that reduces errors and increases productivity.

Installation

To install the EDB Postgres for Kubernetes Operator, you can use the OpenShift web console or the oc command-line interface. The web console offers a cluster-wide installation option, which makes the operator available in all namespaces, but requires careful consideration of user roles and permissions.

You can install the operator in multiple namespaces using the oc CLI, which provides a flexible and powerful approach. This method involves creating an OperatorGroup and a Subscription object, and can be used for single project binding as well.


Here are the general steps for installing the operator using the oc CLI:

  1. Check that the cloud-native-postgresql operator is available from the OperatorHub.
  2. Inspect the operator to verify the installation modes and available channels.
  3. Create an OperatorGroup object in the target namespace, specifying the target namespaces.
  4. Create a Subscription object in the target namespace to subscribe to the desired channel.
  5. Apply the YAML file definitions for the OperatorGroup and Subscription objects using oc apply -f.
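As a sketch, steps 3 through 5 might look like the following, assuming a hypothetical target namespace named my-project (adjust the names to your environment):

```yaml
# OperatorGroup scoping the operator to the hypothetical my-project namespace
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cloud-native-postgresql
  namespace: my-project
spec:
  targetNamespaces:
  - my-project
---
# Subscription to the cloud-native-postgresql operator in the same namespace
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cloud-native-postgresql
  namespace: my-project
spec:
  channel: fast
  name: cloud-native-postgresql
  source: certified-operators
  sourceNamespace: openshift-marketplace
```

Save both definitions to a file and create them with oc apply -f on that file.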

Install NVIDIA GPU Operator

Installing the NVIDIA GPU Operator is a straightforward process: follow the guidance in Installing the NVIDIA GPU Operator on OpenShift in the OpenShift documentation, which walks you through the entire installation process.

The NVIDIA GPU Operator is designed to work seamlessly with OpenShift Container Platform. In fact, for bare metal or VMware vSphere with GPU Passthrough, you don't need to make any changes to the ClusterPolicy.

To verify the successful installation of the NVIDIA GPU Operator, you'll want to check the pods in the OpenShift Container Platform web console. From the side menu, select Workloads > Pods, and then select the nvidia-gpu-operator project.

After a successful installation you should see the GPU Operator pods, typically the driver, container toolkit, device plugin, DCGM exporter, and validator pods, in the Running or Completed state.

Cluster-Wide Installation with OC

You can install the operator globally using oc, taking advantage of the default OperatorGroup called global-operators in the openshift-operators namespace. This approach makes the operator available in all namespaces.


To install the operator globally, you'll need to create a new Subscription object for the cloud-native-postgresql operator in the openshift-operators namespace. This can be done using the oc apply -f command.

Here are the steps to follow:

* Create a new Subscription object in the openshift-operators namespace with the following YAML file definition:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cloud-native-postgresql
  namespace: openshift-operators
spec:
  channel: fast
  name: cloud-native-postgresql
  source: certified-operators
  sourceNamespace: openshift-marketplace
```

* Use oc apply -f with the above YAML file definition to create the Subscription object.

By following these steps, you can install the operator globally using oc and make it available in all namespaces.

Configuration

In OpenShift Fusion with ECE, you can configure the cluster by specifying the number of worker nodes. This number can range from 1 to 10.

The default configuration for OpenShift Fusion with ECE is a three-node cluster, which includes one master node and two worker nodes.

You can also configure the network policies to control the flow of traffic between pods. For example, you can create a policy to allow traffic from one pod to another.
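A minimal sketch of such a policy, assuming hypothetical app=frontend and app=backend pod labels in a my-project namespace:

```yaml
# Hypothetical policy: allow ingress to pods labeled app=backend
# only from pods labeled app=frontend in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: my-project
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
  policyTypes:
  - Ingress
```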

To configure the storage, you can specify the size of the persistent volume claims (PVCs) for each namespace. This can be done using the OpenShift console or through the command-line interface (CLI).
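For example, a PVC requesting a specific size might look like this sketch; the namespace, claim name, size, and StorageClass name are assumptions for illustration:

```yaml
# Hypothetical 10Gi claim in the my-project namespace
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
  namespace: my-project
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hpe-standard
```

The same definition can be created from the console's form view or applied through the CLI with oc apply -f.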

Security


By default, OpenShift prevents containers from running as root, which can cause issues with applications that require persistent storage. Containers are run using an arbitrarily assigned user ID.

To request and manage storage through OpenShift, users need the appropriate permissions and Security Context Constraints (SCC). Modifying container security to work with OpenShift is outside the scope of this document.

EDB Postgres for Kubernetes on OpenShift supports the restricted and restricted-v2 SCCs; which of the two applies depends on the version of EDB Postgres for Kubernetes and OpenShift being used.

In OpenShift 4.11, the default SCCs are restricted-v2, nonroot-v2, and hostnetwork-v2. However, EDB Postgres for Kubernetes only works with the restricted and restricted-v2 SCCs.

Security Model

OpenShift has a default security model that prevents containers from running as root. Containers are run using an arbitrarily assigned user ID.

To deploy applications that require persistent storage, users need to have the appropriate permissions and Security Context Constraints (SCC) to request and manage storage through OpenShift.


Modifying container security to work with OpenShift is outside the scope of this document. For more information on OpenShift security, see Managing security context constraints.

If you run into issues writing to persistent volumes provisioned by the HPE CSI Driver under a restricted SCC, add the fsMode: "0770" parameter to the StorageClass used for RWO claims, or fsMode: "0777" for RWX claims.
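A sketch of the fsMode parameter on an HPE CSI StorageClass for RWO claims; the StorageClass name is an assumption, and the provisioner-specific secret parameters a real StorageClass also needs are omitted here:

```yaml
# Sketch only: adds fsMode for RWO claims; a production StorageClass
# also requires the HPE CSI provisioner's secret parameters.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard-rwo
provisioner: csi.hpe.com
parameters:
  fsMode: "0770"
```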

EDB Postgres for Kubernetes supports the restricted and restricted-v2 SCCs on OpenShift. The supported SCCs vary depending on the version of EDB Postgres for Kubernetes and OpenShift you are running.

Note that because OpenShift 4.10 only provides restricted, EDB Postgres for Kubernetes versions 1.18 and 1.19 support restricted. Future releases of EDB Postgres for Kubernetes are not guaranteed to support restricted.

OADP for Velero

The EDB Postgres for Kubernetes operator recommends using the OpenShift API for Data Protection (OADP) operator for managing Velero in OpenShift environments.


Specific details about how EDB Postgres for Kubernetes integrates with Velero can be found in the Velero section of the Addons documentation.

The OADP operator is a community operator that is not directly supported by EDB.

You don't need the OADP operator to use Velero with EDB Postgres, but it's a convenient way to install Velero on OpenShift.

Storage

In OpenShift Fusion with ECE, the StorageProfile plays a crucial role, especially when using OpenShift Virtualization.

If you're using OpenShift Virtualization and want to enable Live Migration for virtual machines, you'll need to update the StorageProfile to use the "ReadWriteMany" access mode for PVCs cloned from the "openshift-virtualization-os-images" Namespace.

Recent OpenShift EUS releases, starting from v4.12.11, have corrected the default StorageProfile for "csi.hpe.com", so you might not need to take these steps.

If you're using the default StorageClass named "hpe-standard", you'll need to replace the spec with a new one that includes the "ReadWriteMany" access mode.

This will ensure that your PVCs are re-created with the correct access mode, simplifying workflows for users.
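A sketch of what the updated StorageProfile might look like, assuming OpenShift Virtualization's CDI StorageProfile API and the hpe-standard StorageClass mentioned above:

```yaml
# Sketch: StorageProfile for the hpe-standard StorageClass, allowing
# ReadWriteMany block volumes so Live Migration can work.
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: hpe-standard
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteMany
    volumeMode: Block
```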

The HPE CSI Driver v2.5.0 resolved the accessMode transformation issue for block volumes cloned from RWO PVCs to RWX clones, making it easier to use source RWX PVs.

Patching the hpevolumeinfo CRD requires HPE CSI Driver v2.5.0 or later.

Architecture


In OpenShift, the number of availability zones or data centers for your environment plays a critical role in determining the architecture.

Having an OpenShift cluster spanning 3 or more availability zones is recommended to fully exploit EDB Postgres for Kubernetes, as outlined in the "Disaster Recovery Strategies for Applications Running on OpenShift" blog article.

If your OpenShift cluster has only one availability zone, it becomes your Single Point of Failure (SPoF) from a High Availability standpoint.

Container Platform

Container platforms like OpenShift provide a way to deploy and manage software in a scalable and efficient manner. OpenShift 4 follows the Operator pattern for software deployment.

CSI drivers, including the HPE CSI driver, are deployed on OpenShift 4. The next step is to create a HPECSIDriver object.

OpenShift 4 offers a robust platform for managing containerized applications. This platform allows for the creation of HPECSIDriver objects, which is a crucial step in deploying the HPE CSI driver.

The Operator pattern is a key feature of OpenShift 4, enabling the deployment of software like CSI drivers in a managed and scalable way.

Container Platform with GPU Options


If you're using OpenShift Container Platform on bare metal or VMware vSphere with GPU Passthrough, you don't need to make any changes to the ClusterPolicy.

The NVIDIA GPU Operator can be installed on OpenShift by following the guidance in Installing the NVIDIA GPU Operator on OpenShift.

You can also use the vGPU driver with bare metal and VMware vSphere VMs with GPU Passthrough. In this case, follow the guidance in the section "OpenShift Container Platform on VMware vSphere with NVIDIA vGPU". Setting pciPassthru.use64bitMMIO to TRUE is an option when using the vGPU driver.

Red Hat on VMware

Red Hat on VMware is a powerful combination that requires some special considerations. To install OpenShift on vSphere, follow the steps outlined in the RedHat OpenShift documentation, specifically the Installing vSphere section.

You'll need to change the boot method of each VM that's deployed as a worker and the VM template to be EFI, which requires powering down running worker VMs. The template must be converted to a VM, then the boot method changed to EFI, and then converted back to a template.

Secure Boot needs to be disabled, and for the UPI install method, the boot method must be changed to EFI before continuing to Step 9. To support GPUDirect RDMA, the VM needs the following configuration parameters set: pciPassthru.allowP2P=true and pciPassthru.RelaxACSforP2P=true.
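In .vmx form, these GPUDirect RDMA settings look like the following sketch; they are set as advanced configuration parameters on the powered-off VM:

```
pciPassthru.allowP2P = "true"
pciPassthru.RelaxACSforP2P = "true"
```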

Access and Permissions


Access and Permissions is a crucial aspect of OpenShift Fusion with ECE. You can rely on the cluster-admin role to manage resources centrally, but it's not the only option.

The RBAC framework in Kubernetes/OpenShift allows you to create custom roles and bind them to specific users or groups. For example, you can bind the clusters.postgresql.k8s.enterprisedb.io-v1-admin cluster role to a specific user in a specific namespace.

Predefined cluster roles are also available for use with EDB Postgres for Kubernetes CRDs. These roles include admin, edit, view, and crdview suffixes, which can be reused across multiple projects while allowing customization within individual projects.

Here are the predefined cluster roles available for EDB Postgres for Kubernetes CRDs:

  • admin
  • edit
  • view
  • crdview

You can verify the list of predefined cluster roles by running oc get clusterroles, and inspect an actual role with oc get clusterrole <name> -o yaml. For example, the clusters.postgresql.k8s.enterprisedb.io-v1-admin cluster role enables everything on the cluster resource defined by the postgresql.k8s.enterprisedb.io API group.

Access to EDB Private Registry


To access the EDB private registry, you'll need a valid EDB subscription plan. This grants you access to the private repository where the operator and operand images are stored.

You'll need to create a pull secret in the openshift-operators namespace. This secret is used to access the operand and operator images in the private repository.

The pull secret should be named postgresql-operator-pull-secret for the EDB Postgres for Kubernetes operator images. You can create each secret using the oc create command.

To do this, replace @@REPOSITORY@@ with the name of the repository, as explained in "Which repository to choose?" and @@TOKEN@@ with the repository token for your EDB account, as explained in "How to retrieve the token".
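As a sketch, the pull secret can be created like this; the registry host shown is an assumption (use the one from your EDB account), and the @@REPOSITORY@@ and @@TOKEN@@ placeholders must be replaced with your own values before running it:

```shell
# Sketch: create the operator pull secret in openshift-operators.
# Replace the placeholders with your repository name and token.
oc create secret docker-registry postgresql-operator-pull-secret \
  -n openshift-operators \
  --docker-server=docker.enterprisedb.com \
  --docker-username=@@REPOSITORY@@ \
  --docker-password=@@TOKEN@@
```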

Users and Permissions

Users and Permissions are crucial to manage resources effectively in EDB Postgres for Kubernetes. You can rely on the cluster-admin role for central management, but using the RBAC framework is a more flexible and secure approach.


The RBAC framework allows you to bind specific roles to users or groups in a specific namespace, just like with any other cloud-native application. For example, you can bind the clusters.postgresql.k8s.enterprisedb.io-v1-admin cluster role to a specific user in the web-prod project.
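A sketch of such a binding, assuming a hypothetical user named webadmin in the web-prod namespace:

```yaml
# Sketch: bind the predefined admin cluster role for the Cluster CRD
# to a hypothetical user "webadmin", scoped to the web-prod namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: web-prod-postgresql-admin
  namespace: web-prod
subjects:
- kind: User
  name: webadmin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: clusters.postgresql.k8s.enterprisedb.io-v1-admin
  apiGroup: rbac.authorization.k8s.io
```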

You can also create specific namespaced roles, like in the example where a role is created with administration permissions for all resources managed by the operator in the web-prod namespace.

In addition to namespaced roles, cluster roles are another way to manage permissions. For every CRD owned by EDB Postgres for Kubernetes' CSV, predefined cluster roles are deployed, including admin, edit, view, and crdview roles.

You can verify the list of predefined cluster roles by running a specific command, and you can inspect an actual role as any other Kubernetes resource. For example, the clusters.postgresql.k8s.enterprisedb.io-v1-admin cluster role enables everything on the cluster resource defined by the postgresql.k8s.enterprisedb.io API group.

The four predefined cluster roles carry the admin, edit, view, and crdview suffixes. These cluster roles can be reused across multiple projects while allowing customization within individual projects through local roles.

Web Console


To access the OpenShift web console, you'll need to log in as kube:admin. Navigate to Operators -> OperatorHub.

Search for 'HPE CSI' in the search field and select the non-marketplace version. The latest supported HPE CSI Operator on OpenShift 4.14 is 2.4.2.

Select the Namespace where the SCC was applied, choose 'Manual' Update Approval, and click 'Install'. This will install the HPE CSI Operator.

Once installed, select 'View Operator' to inspect the Operator. By navigating to the Developer view, you can inspect the CSI driver and Operator topology.

The CSI driver is now ready for use.

Frequently Asked Questions

What is the difference between IBM Fusion and Fusion HCI?

IBM Storage Fusion is an operator installed on Red Hat OpenShift, while IBM Storage Fusion HCI System is a complete rack of devices with both OpenShift and Fusion software pre-installed. The key difference lies in their installation and configuration requirements.

What is IBM Red Hat OpenShift?

IBM Red Hat OpenShift is a fully managed cloud service for building, deploying, and scaling critical applications with built-in security. It's designed to help organizations efficiently manage their applications in the cloud.
