Deploying an NFS server on OpenShift is a straightforward way to share files across a cluster, letting multiple pods read and write the same data.
To get started, create an NFS server pod in your OpenShift cluster and back it with a persistent volume claim to hold the shared data; both can be created with the OpenShift CLI (oc).
The NFS server pod then needs to be exposed to the rest of the cluster through a service so that other pods can reach it, which is again just an oc command or a small object definition away.
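As a rough sketch only, and assuming an in-cluster NFS server pod labelled app: nfs-server (the name, label, and port list here are assumptions for illustration, not taken from any particular image or chart), the service might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nfs-server            # assumed service name
spec:
  selector:
    app: nfs-server           # must match the labels on your NFS server pod
  ports:
    - name: nfs
      port: 2049              # NFSv4
    - name: mountd
      port: 20048             # only needed for NFSv3-style mounts
    - name: rpcbind
      port: 111
```

Other pods can then reach the NFS server at the service's cluster IP (or DNS name).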
Provisioning
To provision an NFS volume in OpenShift Container Platform, you need to create an object definition for the Persistent Volume (PV). This involves specifying the name, capacity, and access modes of the volume, as well as the NFS server and export path.
The name of the volume is its identity in various oc commands, and it should be unique. The capacity of the volume is the amount of storage allocated to it, which in this case is 5Gi.
The access modes of the volume are used to match a PVC to a PV, but currently, no access rules are enforced based on these modes. Instead, they act as labels to facilitate the binding process.
The volume type being used is the nfs plugin, and the path that is exported by the NFS server is /tmp. The hostname or IP address of the NFS server is 172.17.0.2.
Each NFS volume must be mountable by all schedulable nodes in the cluster. To verify that the PV was created, you can use the oc get pv command.
To summarize the PV creation process: specify the volume's name, capacity, and access modes, point the nfs section at the server and export path, create the object, and verify it.
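For reference, a PV object definition matching the values described above (5Gi, export path /tmp, server 172.17.0.2) might look like the following sketch; the name pv0001 and the Retain reclaim policy are illustrative choices rather than requirements:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001                      # identity used in oc commands; must be unique
spec:
  capacity:
    storage: 5Gi                    # amount of storage allocated to the volume
  accessModes:
    - ReadWriteOnce                 # acts as a label used to match PVCs to PVs
  nfs:
    path: /tmp                      # path exported by the NFS server
    server: 172.17.0.2              # hostname or IP of the NFS server
  persistentVolumeReclaimPolicy: Retain
```

Saved as, say, nfs-pv.yaml, it can be created with oc create -f nfs-pv.yaml and checked with oc get pv.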
Once the PV is created, you can create a persistent volume claim (PVC) that binds to the new PV. The PVC should specify the access modes and storage capacity required.
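A matching claim is short; as a sketch (the claim name nfs-claim1 is an assumption), it only needs an access mode and a storage request no larger than the PV's capacity:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1                  # assumed claim name
spec:
  accessModes:
    - ReadWriteOnce                 # must match a mode offered by the PV
  resources:
    requests:
      storage: 5Gi                  # request up to the PV's capacity
```

Once created, the claim binds to an available PV that satisfies the requested mode and size.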
NFS Configuration
To configure persistent storage on OpenShift v3, you'll need to use a storage solution such as NFS, iSCSI, Ceph RBD, or GlusterFS.
For a lab environment, you can configure an NFS server on the OpenShift v3 master, but for production environments, it's recommended to use external storage.
The first step is to install the NFS server and start its services.
Configure NFS Server
To configure an NFS server on the OpenShift v3 master, install the NFS server packages and start the related services.
Running the NFS server on the master is fine for a lab environment, but in production you'll want external storage for obvious reasons.
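On a RHEL/CentOS 7 based master, the install-and-start sequence typically looks like this (package and service names are the RHEL 7 defaults; adjust for your distribution):

```bash
# Install the NFS server packages and enable the services on the master
yum install -y nfs-utils rpcbind
systemctl enable rpcbind nfs-server
systemctl start rpcbind nfs-server
```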
You'll need to allow access through iptables, as OpenShift v3 uses iptables and not firewalld. This is a key difference to keep in mind.
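For example, opening NFSv4 (2049) plus rpcbind (111) and mountd (20048) for older NFSv3 clients; the exact rules, and how you persist them, will depend on your setup:

```bash
# Open NFS-related ports in iptables (OpenShift v3 manages iptables, not firewalld)
iptables -I INPUT -p tcp --dport 2049  -j ACCEPT   # nfs
iptables -I INPUT -p tcp --dport 111   -j ACCEPT   # rpcbind
iptables -I INPUT -p tcp --dport 20048 -j ACCEPT   # mountd
service iptables save   # persists rules if the iptables-services package is installed
```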
By default, the SELinux sVirt policy prevents containers from writing to NFS shares, so you'll need to enable the SELinux booleans that allow sVirt-confined containers to write to NFS.
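On the hosts involved this comes down to the virt_use_nfs boolean (plus virt_sandbox_use_nfs on older Docker-based installs):

```bash
# Allow sVirt-confined containers to read and write NFS mounts
setsebool -P virt_use_nfs 1
setsebool -P virt_sandbox_use_nfs 1
```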
Configure NFS Client
To configure an NFS client, install the nfs-utils package on all OpenShift v3 nodes; this is what allows the nodes to mount NFS shares.
The nfs-utils package is a must-have for any NFS client setup, and in my experience it makes all the difference in getting NFS shares mounted smoothly.
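On each RHEL/CentOS 7 node that is a single command:

```bash
# Run on every OpenShift v3 node that may mount NFS-backed volumes
yum install -y nfs-utils
```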
Persistent Storage
OpenShift v3 supports persistent storage through Kubernetes storage plugins, including NFS, iSCSI, Ceph RBD, and GlusterFS, which Red Hat has contributed to Kubernetes.
To configure persistent storage, the storage must be reachable from all OpenShift v3 nodes over NFS, iSCSI, Ceph RBD, or GlusterFS, because Kubernetes can schedule a pod's containers onto any node and is responsible for mounting the storage wherever the pod lands.
You can create a pool of persistent volumes in Kubernetes, with each volume mapped to an external storage file system. The Kubernetes scheduler decides where to deploy the pod, and external storage is mounted on that node and presented to all containers within the pod. If persistent storage is no longer needed, it can be reclaimed and made available to other pods.
Configure Persistent Storage
To configure persistent storage, you need to make the storage available to all OpenShift v3 nodes using NFS, iSCSI, Ceph RBD, or GlusterFS.
This walkthrough uses NFS, since it's one of the supported storage plugins: install the NFS server on the OpenShift v3 master and start its services, as covered in the Configure NFS Server section above. That setup is sufficient for a lab environment; for production, use external storage.
Once the NFS server is up and running, export the storage so that it's available to all OpenShift v3 nodes; you can then create persistent volumes on top of it and use them from your OpenShift v3 pods.
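Exporting a directory is a matter of listing it in /etc/exports and re-exporting; the path and export options below are assumed example values, not anything OpenShift prescribes:

```bash
# Create and export a directory for persistent volumes (path is an example)
mkdir -p /var/export/pvs
chown nfsnobody:nfsnobody /var/export/pvs
echo "/var/export/pvs *(rw,sync,root_squash)" >> /etc/exports
exportfs -r      # re-export everything listed in /etc/exports
showmount -e     # verify the export is visible
```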
NFS Volume Security
NFS volume security is crucial for OpenShift Container Platform. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux.
Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their pod definition; the resulting mount carries the same POSIX ownership and permissions that are set on the exported NFS directory.
The /etc/exports file on the NFS server lists the directories it makes accessible, and each exported directory has a POSIX owner and group ID; in the example used here, the export is owned by UID 65534 (nfsnobody) with group ID 5555.
To access the directory, the container must match the export's SELinux labels and must either run with a UID of 65534 or include 5555 in its supplemental groups. The owner ID of 65534 is only an example; NFS exports can have arbitrary owner IDs, and 65534 is not required.
Here's a summary of the required settings:
- The pod's SELinux labels must allow access to the NFS mount (matching labels, or the SELinux booleans described earlier).
- The pod runs with the export's owner UID (65534, nfsnobody, in this example), or
- The pod includes the export's group ID (5555 in this example) in its supplemental groups.
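In practice the supplemental-groups route is the most common; a pod sketch along those lines (the group ID 5555 comes from the example export above, while the pod name, image, and claim name are placeholders) would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-app                   # placeholder name
spec:
  securityContext:
    supplementalGroups: [5555]    # group that owns the exported directory
  containers:
    - name: app
      image: registry.example.com/myapp:latest   # placeholder image
      volumeMounts:
        - name: nfs-data
          mountPath: /data
  volumes:
    - name: nfs-data
      persistentVolumeClaim:
        claimName: nfs-claim1     # PVC bound to the NFS-backed PV
```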
By following these guidelines, you can ensure secure access to your NFS volumes in OpenShift Container Platform.
Configure Persistent Volumes
To configure persistent volumes, create a template file for the PV objects; Kubernetes accepts both JSON and YAML, and this example uses JSON.
In the template, replace the NFS server address with the IP of your OpenShift v3 master, where the NFS server from the previous steps is running.
You can create a pool of 20 persistent volumes, which will be used to store data.
To automate the process, you can create a for loop that will create the NFS shares, set permissions, and create persistent volumes in OpenShift v3.
This will make it easier to manage your persistent storage and ensure that it's available to all nodes.
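A sketch of such a loop, assuming the exports live under /var/export on the master; the directory layout, the 5Gi size, and the <master-ip> placeholder are all assumptions to adapt to your environment:

```bash
# On the NFS server (the OpenShift v3 master in this lab setup):
# create 20 export directories, set ownership and permissions, and export them.
for i in $(seq -w 1 20); do
  mkdir -p /var/export/pv${i}
  chown nfsnobody:nfsnobody /var/export/pv${i}
  chmod 700 /var/export/pv${i}
  echo "/var/export/pv${i} *(rw,sync,root_squash)" >> /etc/exports
done
exportfs -r

# Register a matching PersistentVolume in OpenShift for each export.
# Replace <master-ip> with the IP address of your OpenShift v3 master.
for i in $(seq -w 1 20); do
  oc create -f - <<EOF
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": { "name": "pv${i}" },
  "spec": {
    "capacity": { "storage": "5Gi" },
    "accessModes": [ "ReadWriteOnce" ],
    "nfs": { "path": "/var/export/pv${i}", "server": "<master-ip>" },
    "persistentVolumeReclaimPolicy": "Retain"
  }
}
EOF
done
```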
You can now list the persistent storage volumes in OpenShift v3, and notice that we have no claims yet.
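A quick check:

```bash
# Unbound volumes show STATUS "Available" with an empty CLAIM column
oc get pv
```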
Sources
- https://docs.openshift.com/container-platform/4.15/storage/persistent_storage/persistent-storage-nfs.html
- https://docs.cloudera.com/machine-learning/1.5.4/private-cloud-requirements/topics/ml-pvc-nfs-ocp.html
- https://keithtenzer.com/openshift/openshift-v3-unlocking-the-power-of-persistent-storage/
- https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-nfs.html
- https://meatybytes.io/posts/openshift/general-engineering/homelab/storage/nfs/