
OpenShift storage solutions are designed to provide scalable data management for growing businesses. This is achieved through container-native storage, which allows for efficient storage and retrieval of data.
For example, OpenShift provides a scalable and highly available storage solution through its integration with popular storage solutions like Red Hat Ceph Storage and Red Hat Gluster Storage. These solutions provide a robust and scalable storage infrastructure that can handle large amounts of data.
By using OpenShift storage solutions, businesses can ensure that their data is always available and can be easily scaled up or down as needed. This is particularly important for businesses that experience rapid growth or have fluctuating storage needs.
OpenShift also provides a range of storage constructs, including persistent volumes, storage classes, and StatefulSets, which can be used to manage data in different ways.
OCS Architecture
OCS is built specifically for container environments, providing container-native storage services that are highly integrated with OpenShift Container Platform (OCP).
OCS architecture is designed to give data a permanent place to live, even when containers spin up and down.
This means that OCS can easily scale across various deployments, including bare metal, virtual, container, and cloud environments.
OCS Architecture Overview
OCS is highly integrated with OpenShift Container Platform, making it a seamless solution for container-native storage services. It's designed to deliver persistent storage services that are highly available, dynamic, and stateful.
OCS supports multiple protocols, including Block Storage, File Storage, and Object Storage, making it a single unified platform that meets all container application needs. This support includes the ReadWriteMany (RWX) access mode, which is not commonly found in other storage solutions.
The architecture of OCS is built around a software-defined storage solution that enables dynamic provisioning of Persistent Volumes (PVs). This lets developers eliminate the waiting that static provisioning previously required, making it a more efficient solution.
OCS is tightly integrated with OpenShift, allowing it to serve all storage needs of container applications as well as OpenShift internal services, such as logging, metrics, and the registry. This integration also enables the registry service to be deployed in a highly available design.

Here are the key components of OCS architecture:
- External CSI Controllers: Deploy one or multiple pods with containers such as attacher, provisioner, and CSI driver container.
- CSI Driver DaemonSet: Runs a CSI driver-installed pod on each node to mount storage provided by a CSI driver.
- CSI Driver Registrar: Registers a CSI driver into an openshift-node service running on a node.
These components work together to provide a seamless and efficient storage solution for container applications running on OpenShift.
Cloud Volumes ONTAP
Cloud Volumes ONTAP is NetApp's storage management service for cloud platforms like AWS, Azure, and Google Cloud, and it integrates with OpenShift Container Storage to provide secure and proven storage management.
Cloud Volumes ONTAP can scale up to petabytes of capacity, making it suitable for various enterprise use cases.
With Cloud Volumes ONTAP, you get high availability, data protection, storage efficiencies, Kubernetes integration, and more.
It supports Kubernetes Persistent Volume provisioning and management requirements of containerized workloads, ensuring seamless integration with OpenShift Container Storage.
This means you can easily manage and scale your storage needs as your containerized applications grow, without worrying about storage limitations.
Cloud Volumes ONTAP has been designed to address the challenges of containerized applications, and its case studies demonstrate its effectiveness in real-world scenarios.
Storage Types
OpenShift Container Platform storage is broadly classified into two categories: ephemeral storage and persistent storage.
Ephemeral storage is always made available in the primary partition, which can be shared between user pods, the OS, and Kubernetes system daemons. This partition holds the kubelet root directory and /var/log/ directory by default.
There are two basic ways of configuring the primary partition: root and runtime. If a separate runtime partition exists, the root partition does not hold any image layers or other writable storage.
Here are the types of ephemeral storage:
- EmptyDir volumes
- Container logs
- Image layers
- Container-writable layers
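Of the types above, emptyDir volumes are the simplest to see in a pod spec: scratch space that lives exactly as long as the pod does. A minimal sketch (the pod name and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod           # illustrative name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi   # illustrative image
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /scratch     # scratch space visible inside the container
      name: scratch
  volumes:
  - name: scratch
    emptyDir: {}              # allocated on the primary partition; removed with the pod
```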
Types of PVs
OpenShift Container Platform supports a wide range of persistent volume plugins, which are essential for storing data that needs to persist even after a pod is deleted.
One of the most popular plugins is AWS Elastic Block Store (EBS), which provides a highly available and durable storage solution.
Another option is Azure Disk, which allows you to create and manage disks in Azure.
Azure File is also supported, enabling you to share files between containers and pods.
OpenShift Container Platform also supports Cinder, the OpenStack block storage service.
You can also use Fibre Channel for high-performance storage needs.
GCE Persistent Disk is another option, providing durable and highly available storage.
HostPath is a plugin that mounts a file or directory from the host node's filesystem into a pod; it is intended for development and testing rather than production use.
iSCSI is also supported, enabling you to use an iSCSI target as a persistent volume.
Local volume is a plugin that exposes a local disk, partition, or directory as a persistent volume, with scheduling tied to the node that owns the storage.
NFS is also supported, enabling you to use a Network File System as a persistent volume.
OpenStack Manila is a plugin that allows you to use OpenStack's shared file system service to provision persistent volumes.
Red Hat OpenShift Container Storage is a plugin that provides a highly available and scalable storage solution.
VMware vSphere is also supported, enabling you to use a vSphere datastore as a persistent volume.
Here is a list of the supported persistent volume plugins:
- AWS Elastic Block Store (EBS)
- Azure Disk
- Azure File
- Cinder
- Fibre Channel
- GCE Persistent Disk
- HostPath
- iSCSI
- Local volume
- NFS
- OpenStack Manila
- Red Hat OpenShift Container Storage
- VMware vSphere
GCE Disk
GCE Disk is a type of persistent volume plugin supported by OpenShift Container Platform. It's a great option for those familiar with Kubernetes and GCE.
OpenShift Container Platform supports GCE Persistent Disk volumes, also known as gcePD. This allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without knowing the underlying infrastructure.
You can provision your OpenShift Container Platform cluster with persistent storage using GCE. This is made possible by the Kubernetes persistent volume framework.
GCE Persistent Disk volumes can be provisioned dynamically. This means you can easily create and manage storage as needed.
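A minimal sketch of a StorageClass for dynamically provisioning gcePD volumes (the class name and disk type here are illustrative, not taken from the article):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-pd-ssd            # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd                # pd-standard is the slower, cheaper alternative
reclaimPolicy: Delete         # dynamically provisioned disks are deleted with their claims
```

PVCs that reference this class by name then trigger on-demand disk creation in GCE, with no administrator pre-provisioning required.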
Storage Provisioning
Storage provisioning in OpenShift is a crucial aspect of managing persistent storage for your applications. You can configure one or more dynamic provisioners to provision storage and a matching PV in response to requests from a developer defined in a PVC.
A cluster administrator can also create a number of PVs in advance that carry the details of the real storage available for use. This allows for more control over the storage resources allocated to applications.
To differentiate and delineate storage levels and usages, storage classes are used. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
Here are the steps to create a storage class:
- Click Storage → Storage Classes in the OpenShift Container Platform console.
- Click Create Storage Class.
- Define the desired options on the page that appears.
- Click Create to create the storage class.
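The console steps above correspond to creating a StorageClass object; the same result can be sketched in YAML (the name and provisioner here are illustrative assumptions, not from the article):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                        # illustrative name
provisioner: kubernetes.io/aws-ebs  # illustrative provisioner; varies by platform
parameters:
  type: gp2                         # provisioner-specific parameter
```

Applying this object with the CLI instead of the console achieves the same effect as the wizard.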
Provision
Provisioning storage is a crucial step in setting up a cluster. A cluster administrator can configure one or more dynamic provisioners that provision storage in response to requests from a developer defined in a PVC.
To provision storage, a cluster administrator can also create a number of PVs in advance that carry the details of the real storage available for use. This approach allows for more control and flexibility.
In an OpenShift Container Platform environment, provisioning storage involves defining a storage class. This can be done through the console by clicking Storage → Storage Classes and then clicking Create Storage Class.
To create a storage class, you'll need to define the desired options on the page that appears. This typically includes specifying the storage level and usage. The options will vary depending on your specific environment and requirements.
Here are the general steps to create a storage class:
- In the OpenShift Container Platform console, click Storage → Storage Classes.
- Click Create Storage Class.
- Define the desired options on the page that appears.
- Click Create to create the storage class.
By following these steps and defining a storage class, you can obtain dynamically provisioned persistent volumes and differentiate and delineate storage levels and usages.
Bind Claims
Binding claims is a crucial part of the storage provisioning process in OpenShift Container Platform. A persistent volume claim (PVC) is essentially a request for a specific amount of storage that specifies the required access mode and, optionally, a storage class to describe and classify the storage.
The control loop in the master watches for new PVCs and binds the new PVC to an appropriate PV. If an appropriate PV does not exist, a provisioner for the storage class creates one. This ensures that the PVC is bound to the smallest PV that matches all other criteria, minimizing excess storage.
Claims remain unbound indefinitely if a matching volume does not exist or cannot be created by any available provisioner servicing a storage class. In that case, the PVC stays unbound until a matching PV is created or becomes available.
Here's a summary of the bind claim process:
- A PVC is created with a specific storage request
- The control loop watches for new PVCs and binds them to an appropriate PV
- If no matching PV exists, a provisioner creates one
- Claims are bound to the smallest PV that matches all other criteria
By understanding how bind claims work, you can ensure that your storage provisioning process is efficient and effective.
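The claim side of this process can be sketched as a PVC requesting a specific amount of storage (the names and sizes here are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data              # illustrative name
spec:
  accessModes:
    - ReadWriteOnce           # the required access mode
  resources:
    requests:
      storage: 10Gi           # bound to the smallest matching PV of at least this size
  storageClassName: fast      # illustrative class name
```

The control loop binds this claim to the smallest available PV that satisfies the size, access mode, and storage class, or asks the class's provisioner to create one.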
HostPath
HostPath is a storage option in OpenShift Container Platform that allows you to mount a host directory as a file system within a container. This can be useful for development, testing, and debugging purposes.
You can statically provision a hostPath volume by defining a PersistentVolume object with a hostPath specification. This involves creating a YAML file with the PersistentVolume object definition, specifying the hostPath path, and then creating the PV from the file.
To statically provision a hostPath volume, you'll need to create a YAML file, such as pv.yaml, with the following structure:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data"
```
This YAML file specifies the hostPath path as /mnt/data on the cluster's node.
HostPath volumes do not support dynamic provisioning. To consume a statically provisioned hostPath volume, a PersistentVolumeClaim (PVC) must exist that is mapped to the underlying hostPath share.
To mount a hostPath share in a privileged pod, you'll need to create a privileged pod that mounts the existing PVC. This involves creating a YAML file with the Pod object definition, specifying the securityContext as privileged, and mounting the hostPath share at a specific path.
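A sketch of that PVC, using the manual storage class and the task-pvc-volume claim name from the surrounding examples (the requested size is an illustrative assumption):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pvc-volume
spec:
  storageClassName: manual    # matches the statically provisioned PV's class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # illustrative size; must fit within the PV's 5Gi capacity
```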
Here's an example YAML file for a privileged pod that mounts a hostPath share:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
    ...
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /data
      name: hostpath-privileged
  ...
  volumes:
    - name: hostpath-privileged
      persistentVolumeClaim:
        claimName: task-pvc-volume
```
This YAML file specifies the securityContext as privileged and mounts the hostPath share at /data.
Storage Management
Storage management in OpenShift is a crucial aspect of ensuring efficient and scalable infrastructure. You can manage ephemeral storage within a project by setting quotas that define limit ranges and limits on the amount of ephemeral storage requested across all pods in a non-terminal state.
Developers can also set requests and limits on ephemeral storage at the pod and container level. This allows for fine-grained control over resource utilization.
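Per-container ephemeral-storage requests and limits can be sketched like this (the pod name, image, and sizes are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod           # illustrative name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi   # illustrative image
    resources:
      requests:
        ephemeral-storage: "2Gi"   # used for scheduling decisions
      limits:
        ephemeral-storage: "4Gi"   # exceeding this can evict the pod
```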
To create a storage class, follow these steps:
- Click Storage → Storage Classes in the OpenShift Container Platform console.
- Click Create Storage Class.
- Define the desired options on the page that appears.
- Click Create to create the storage class.
This process enables the dynamic provisioning of persistent volumes.
Storage classes are used to differentiate and delineate storage levels and usages, allowing users to obtain dynamically provisioned persistent volumes. Statically created PVs default to the Retain reclaim policy, but administrators can set the policy to Delete so that volumes are automatically reclaimed when their claims are released.
Management
Managing your storage effectively is key to a smooth-running project. Cluster administrators can manage ephemeral storage within a project by setting quotas that define the limit ranges and number of requests for ephemeral storage across all pods in a non-terminal state.
Developers can set requests and limits on ephemeral storage at the pod and container level, giving them more control over their compute resources. This allows for more flexibility and customization.
Enforcing Disk Quotas
Enforcing disk quotas is crucial for managing storage resources in a Kubernetes cluster. You can use LUN partitions to enforce disk quotas and size constraints, with each LUN being one persistent volume.
Kubernetes enforces unique names for persistent volumes, ensuring that each volume has a distinct identity. This allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
You can also use disk partitions to enforce disk quotas and size constraints in OpenShift Container Platform. Each partition can be its own export, with each export being one persistent volume.
Here are the ways to enforce disk quotas in OpenShift Container Platform:
- Use LUN partitions with unique names for persistent volumes.
- Use disk partitions with each partition being its own export.
Azure Disk storage class also supports enforcing disk quotas. If kind is set to Managed, Azure creates new managed disks. If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk.
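A sketch of an Azure Disk StorageClass using the kind parameter described above (the class name and SKU are illustrative assumptions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-managed          # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed                # Azure creates new managed disks
  storageaccounttype: Premium_LRS   # illustrative disk SKU
```

With kind set to Dedicated instead, a storageAccount parameter would direct Azure to place new unmanaged disks in that account.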
Logging and Auditing
Logging and Auditing is a crucial aspect of Storage Management.
MinIO supports outputting logs to the Elastic Stack (or third parties) for analysis and alerting. This feature allows for a more comprehensive understanding of storage activities.
Enabling MinIO auditing generates a log for every operation on the object storage cluster. This level of detail is essential for troubleshooting and monitoring storage performance.
MinIO also logs console errors for operational troubleshooting purposes. These logs can help identify and resolve issues quickly and efficiently.
Frequently Asked Questions
What is storage in OpenShift?
Storage in OpenShift refers to the management and deployment of cloud storage and data services through the Red Hat OpenShift Container Platform. It simplifies the process of accessing and utilizing storage resources within your OpenShift environment.
What is the best storage for OpenShift?
For OpenShift, block storage is the preferred choice for optimal performance and efficiency.
What size is OpenShift storage?
OpenShift storage is available in three sizes: 0.5 TiB, 2 TiB, and 4 TiB. Learn more about how to choose the right storage size for your OpenShift needs.