OpenShift version 2.0.3 was released in July 2013, marking a significant milestone in the platform's evolution.
The release included several key updates, such as improved scalability and performance, as well as enhanced security features.
One notable feature of OpenShift 2.0.3 was the introduction of Docker support, which allowed developers to use Docker containers within their applications.
This gave developers a more flexible and efficient way to deploy and manage applications, making it easier to work with OpenShift.
The OpenShift team continued to improve the platform with regular updates, including version 2.1.0, released in October 2013, which brought further enhancements to scalability, security, and ease of use.
In hindsight, version 2.0.3 marked a turning point in the platform's development, paving the way for later innovations and improvements.
Release Notes
Red Hat OpenShift Container Platform version 3.9 is now available, offering a range of new features and improvements.
This release is based on OpenShift Origin 3.9 and includes new features, changes, bug fixes, and known issues. Red Hat decided to skip version 3.8 and release 3.9 directly after 3.7 to better synchronize versions with Kubernetes.
OpenShift Container Platform 3.9 is supported on RHEL 7.3 and 7.4 with the latest packages from Extras, including Docker 1.13. It also supports Atomic Host 7.4.5 and newer.
Here are some key features and changes called out in these release notes:
- An option added to the must-gather data collection tool that gathers information from a specified control plane namespace.
- Improved performance for control planes with hundreds of namespaces.
Both items are described in more detail in the Maistra Service Mesh 2.0.3 notes below.
2.0.3
Red Hat OpenShift Container Platform 3.9 is now available, and it's based on OpenShift Origin 3.9. This release includes new features, changes, bug fixes, and known issues.
Red Hat did not release OpenShift Container Platform 3.8 publicly, instead opting to release 3.9 directly after version 3.7. This decision impacts installation and upgrade processes, so be sure to check the Installation section for more information.
OpenShift Container Platform 3.9 is supported on RHEL 7.3 and 7.4 with the latest packages from Extras, including Docker 1.13. It's also supported on Atomic Host 7.4.5 and newer. The docker-latest package is now deprecated.
This release introduces the following notable technical changes:
- Maistra Service Mesh 2.0.3 is included in this release.
Maistra Service Mesh 2.0.3 brings the following new features:
- An option to gather information from a specified control plane namespace has been added to the must-gather data collection tool. For more information, see OSSM-351.
- Performance for control planes with hundreds of namespaces has been improved.
Deprecated 2.0
In Maistra Service Mesh 2.0, the Mixer component was deprecated, with removal planned for release 2.1.
Using Mixer to implement extensions was still supported in release 2.0, but extensions should be migrated to the new WebAssembly mechanism in preparation for that removal, which is a significant change.
Several resource types are also no longer supported in Maistra Service Mesh 2.0.
Installation and Upgrade
You can install the cloud-native-postgresql operator globally using oc, which makes it available in all namespaces. This is done by creating a Subscription object in the openshift-operators namespace.
To upgrade the operator, you'll need to move to a stable-vX.Y channel or the fast channel, which follows the head of the development trunk of EDB Postgres for Kubernetes. The last supported version of 1.15.x was released in October 2022, and the last supported version of 1.16.x was released in December 2022.
If you're currently in the stable channel and your operator version is 1.15 or older, you should move to stable-v1.15 and upgrade, then repeat the process with stable-v1.16, stable-v1.17, and finally reach stable-v1.18 or fast.
Cluster-Wide Installation with OC
If you prefer a cluster-wide installation, you can use oc to install the operator globally.
You'll need to take advantage of the default OperatorGroup called global-operators in the openshift-operators namespace.
This OperatorGroup is a built-in feature that allows you to install operators across all namespaces in your cluster.
By creating a new Subscription object for the cloud-native-postgresql operator in the same namespace, you can make the operator available in all namespaces.
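As a minimal sketch of what that Subscription might look like (the channel and catalog source values here are illustrative assumptions; check the operator's entry in OperatorHub for the exact values in your catalog):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cloud-native-postgresql
  namespace: openshift-operators   # namespace of the default "global-operators" OperatorGroup
spec:
  name: cloud-native-postgresql    # operator package name
  channel: fast                    # assumption: pick fast or a stable-vX.Y channel
  source: certified-operators      # assumption: catalog source providing the operator
  sourceNamespace: openshift-marketplace
```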
Once you run oc apply -f with the above YAML file, the operator will be available in all namespaces.
Upgrading the Operator
To upgrade your operator safely, you need to be in a stable-vX.Y channel, or the fast channel if you want to follow the head of the development trunk of EDB Postgres for Kubernetes.
If you're currently in the stable channel, you'll need to move to the latest Long Term Supported release of EDB Postgres for Kubernetes, which is currently stable-v1.22.
Be aware that if you're in stable and your operator version is 1.15 or older, you'll need to move to stable-v1.15, upgrade, then repeat the process with stable-v1.16 and stable-v1.17 to finally reach stable-v1.18 or fast.
The last supported version of 1.15.x was released in October 2022, and no future updates to this version are planned.
If you're in stable and your operator version is 1.16, you'll need to move to stable-v1.16, upgrade, then repeat the process with stable-v1.17 to finally reach stable-v1.18 or fast.
The last supported version of 1.16.x was released in December 2022, and no future updates to this version are planned.
To avoid issues when upgrading to version 1.16.0, or to 1.15.2 and later, upgrade to version 1.15.5 first; this automatically removes the offending conditions from all cluster CRs that would otherwise prevent OpenShift from completing the upgrade.
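Moving between channels is done by editing the channel field of the existing Subscription; a hedged sketch follows, where the target channel is simply one hop in the sequence described above:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cloud-native-postgresql
  namespace: openshift-operators
spec:
  name: cloud-native-postgresql
  channel: stable-v1.16            # example: next hop in the stable-v1.15 -> stable-v1.16 -> ... sequence
  source: certified-operators      # assumption: same catalog source used at install time
  sourceNamespace: openshift-marketplace
```

After each channel change, let OLM finish the pending install plan before moving on to the next channel in the sequence.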
3.7 to 3.9 Control Plane Upgrade
Upgrading from OpenShift Container Platform 3.7 to 3.9 is a seamless process, with the installer automatically handling the control plane upgrade from 3.7 to 3.8 to 3.9.
Data migration happens pre- and post-control plane upgrades for OpenShift Container Platform 3.8 and 3.9.
Control plane components are upgraded from 3.7 to 3.9, with the API, controllers, and nodes on control plane hosts being upgraded seamlessly.
Other control plane components, such as the router, registry, service catalog, and brokers, are also upgraded from 3.7 to 3.9.
Nodes are upgraded directly from 3.7 to 3.9 with only one drain of nodes.
OpenShift Container Platform 3.7 nodes can operate indefinitely against 3.8 masters if the upgrade process needs to pause in this state.
Logging and metrics are updated from 3.7 to 3.9.
It's recommended to upgrade the control plane and nodes independently, as this makes the process easier and rollback is more manageable.
Channels
The stable channel was previously used by EDB to distribute cloud-native-postgresql, but it's now obsolete and has been removed.
If you're currently using stable, you have two options for moving off of it: you can move to a stable-vX.Y channel to remain in a minor release, or move to fast, which is the equivalent of stable before we introduced support for multiple minor releases.
EDB Postgres for Kubernetes is available in the following OLM channels: fast and stable-vX.Y. The fast channel contains the latest available patch release in the latest available minor release, while stable-vX.Y branches include only patch versions of the same minor release.
Here are the details of the channels:
- fast: the head version is always the latest available patch release in the latest available minor release.
- stable-vX.Y: the head version is always the latest available patch release in the X.Y minor release.
Considering that both CloudNativePG and EDB Postgres for Kubernetes are developed using trunk-based development and continuous delivery DevOps principles, the recommendation is to use the fast channel.
Manual Upgrade Process Unsupported
As of OpenShift Container Platform 3.9, manual upgrades are no longer supported; the automated upgrade playbooks are the supported path.
In a future release, the manual upgrade process will be removed entirely, a change likely driven by the growing complexity of the platform and the need for more streamlined upgrade processes.
If you have relied on manually upgrading your OpenShift Container Platform clusters, plan to adapt your procedures to the supported automated methods.
Security and Compliance
EDB Postgres for Kubernetes on OpenShift supports the restricted and restricted-v2 SCC (SecurityContextConstraints), which vary depending on the version of EDB Postgres for Kubernetes and OpenShift you are running.
The supported SCCs are as follows:

| EDB Postgres for Kubernetes Version | OpenShift Versions | Supported SCC |
|---|---|---|
| 1.24.x | 4.12-4.17 | restricted, restricted-v2 |
| 1.23.x | 4.12-4.16 | restricted, restricted-v2 |
| 1.22.x | 4.12-4.16 | restricted, restricted-v2 |
| 1.18.x | 4.10-4.13 | restricted, restricted-v2 |
EDB Postgres for Kubernetes drops all capabilities by default, ensuring that it never makes use of any unsafe capabilities. This is in addition to adhering to the Red Hat Certification process and working under the new SCCs introduced in OpenShift 4.11.
Pod Security Standards
Pod Security Standards are crucial for ensuring the security of your Kubernetes cluster. EDB Postgres for Kubernetes supports the restricted and restricted-v2 SCCs, which vary depending on the version of EDB Postgres for Kubernetes and OpenShift you're running.
To understand which SCCs are supported, refer to the table in the previous section.
Note that in OpenShift 4.10 and earlier, only the restricted SCC is provided, and EDB Postgres for Kubernetes versions 1.18 and 1.19 support the restricted SCC there. Future releases are not guaranteed to keep supporting restricted, as OpenShift 4.11 replaced restricted with restricted-v2.
EDB Postgres for Kubernetes drops all capabilities by default, ensuring that the operator never makes use of any unsafe capabilities. This ensures a secure environment for your PostgreSQL database.
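To make concrete what running under restricted-v2 implies, here is a minimal, hand-written sketch of a pod-level security context compatible with that SCC. It illustrates the constraints (dropped capabilities, non-root, no privilege escalation); it is not the operator's actual manifest, and the names and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pg-example                            # hypothetical pod name, for illustration only
spec:
  containers:
    - name: postgres
      image: example.com/postgres:latest      # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        capabilities:
          drop: ["ALL"]                       # all capabilities dropped, as the operator does by default
        seccompProfile:
          type: RuntimeDefault
```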
Cluster Roles
Cluster roles are automatically created by the Operator Lifecycle Manager (OLM) to facilitate role binding definitions and granular implementation of RBAC policies.
These cluster roles carry specific rules that apply to the Custom Resource Definitions shipped with EDB Postgres for Kubernetes.
Some cluster roles are part of the broader Kubernetes/OpenShift realm, indicating their scope and applicability.
By automating the creation of cluster roles, OLM simplifies the process of implementing RBAC policies and ensuring security and compliance in Kubernetes environments.
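As an example of the kind of granular RBAC this enables, the sketch below binds a user to a viewer-style cluster role within a single namespace. The cluster role name is a hypothetical placeholder, since the exact names are generated by OLM for your installed version; the namespace and user are also illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-viewers
  namespace: app-team-1                       # hypothetical application namespace
subjects:
  - kind: User
    name: alice                               # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: clusters-view-example                 # placeholder: substitute the OLM-generated cluster role name
  apiGroup: rbac.authorization.k8s.io
```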
Deprecated and Removed
In OpenShift Container Platform 3.9, several oc secrets subcommands have been deprecated in favor of oc create secret. Specifically, the new, new-basicauth, new-dockercfg, and new-sshauth subcommands are no longer recommended for use.
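For example, where a script previously used one of the deprecated subcommands, the oc create secret form looks like this (the secret name and literal values are placeholders):

```bash
# Preferred replacement for the deprecated oc secrets new-* subcommands
oc create secret generic my-app-credentials \
  --from-literal=username=dev-user \
  --from-literal=password=changeme
```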
If you're upgrading from a previous release, be aware that some features have been removed. The Mixer component, for example, was removed in Service Mesh 2.1. This means that custom metrics for telemetry must now be obtained using Envoy filter.
Upgrading to Service Mesh 2.1 also requires porting Mixer plugins to WebAssembly Extensions. If you're upgrading from a Service Mesh 2.0.x release, you won't be able to proceed if Mixer plugins are enabled.
Here's a summary of removed features in Service Mesh 2.1:
- The Mixer component (custom metrics for telemetry must now be obtained using an Envoy filter).
- Mixer plugins (these must be ported to WebAssembly Extensions before upgrading).
Features and Enhancements
This Service Mesh release brings a range of new features and enhancements to the table. One of the key components is Istio 1.9 support, which introduces a large number of new features and product enhancements.
Maistra 2.1 is based on Istio 1.9, which means you can expect to see improved performance and scalability. However, there are some exceptions to note, including the lack of support for Virtual Machine integration, Kubernetes Gateway API, and remote fetch and load of WebAssembly HTTP filters.
Some features are still in tech preview, such as Request Classification for monitoring traffic and Integration with external authorization systems via Authorization policy's CUSTOM action. These features are being tested and refined, and you can expect to see more information and updates in the future.
The key exceptions are listed in the New Enhancements section below.
Container Orchestration
Container Orchestration is a crucial aspect of OpenShift Container Platform, allowing developers to efficiently manage and scale their applications. Built on Kubernetes, OpenShift provides a secure and scalable multi-tenant operating system.
OpenShift Container Platform supports a wide selection of programming languages and frameworks, making it a versatile platform for developers. This includes languages like Java, Ruby, and PHP, which are commonly used in enterprise-class applications.
By leveraging Kubernetes, OpenShift simplifies the process of deploying and managing applications, reducing the need for manual configuration and management. This makes it easier for organizations to implement a private PaaS that meets their security, privacy, and compliance requirements.
New Enhancements
Maistra 2.1 is based on Istio 1.9, which brings in a large number of new features and product enhancements.
Istio 1.9 introduces several improvements, but some features are still in development. This includes Virtual Machine integration, which is not yet supported.
Kubernetes Gateway API is also not yet supported in Maistra 2.1. This means that users cannot leverage this feature in their current setup.
Custom CA Integration using the Kubernetes CSR API is not supported either. This might be a limitation for some users who rely on this feature.
Request Classification for monitoring traffic is a tech preview feature, which means it's still in development and not fully supported. This might not be suitable for production use cases.
Integration with external authorization systems via Authorization policy's CUSTOM action is another tech preview feature. As with Request Classification, this is still in development and not fully supported.
Here are the unsupported features in Maistra 2.1:
- Virtual Machine integration
- Kubernetes Gateway API
- Remote fetch and load of WebAssembly HTTP filters
- Custom CA Integration using the Kubernetes CSR API
These unsupported features are worth noting for users who plan to migrate to Maistra 2.1.
Default Node Selector and Automatic Labeling
In OpenShift Container Platform 3.9, masters are now marked as schedulable nodes by default.
This change sets the default node selector, which determines which node projects will use by default when placing pods. The default node selector is now set to node-role.kubernetes.io/compute=true unless overridden using the osm_default_node_selector Ansible variable.
Masters are automatically labeled with node-role.kubernetes.io/master=true, which assigns the master node role. This ensures that master nodes are easily identifiable.
Non-master, non-dedicated infrastructure node hosts are automatically labeled with node-role.kubernetes.io/compute=true, which assigns the compute node role. In a default inventory, this means nodes without a region=infra label are labeled as compute nodes.
The following automatic labeling occurs for hosts defined in your inventory file during installations and upgrades:
- Non-master, non-dedicated infrastructure node hosts (by default, those without a region=infra label) are labeled with node-role.kubernetes.io/compute=true.
- Master nodes are labeled with node-role.kubernetes.io/master=true.
This ensures that the default node selector has available nodes to choose from when determining pod placement.
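If the default does not fit your topology, the selector can be overridden through the inventory. A small sketch follows; the osm_default_node_selector variable is the one named above, while the selector value here is only an example:

```ini
[OSEv3:vars]
# Override the default project node selector (defaults to node-role.kubernetes.io/compute=true)
osm_default_node_selector="region=primary"
```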
CloudForms 4.6 Container Management
CloudForms 4.6 Container Management is now supported by OpenShift Container Platform 3.9. The installation playbooks have been updated to include this support.
With the new version of CloudForms, you can now deploy it on OpenShift Container Platform using the Deploying Red Hat CloudForms on OpenShift Container Platform topics.
The release includes several new features and updates, including:
- OpenShift Container Platform template provisioning
- Offline OpenSCAP scans
- Alert management: You can choose Prometheus and use it in CloudForms, which is currently in Technology Preview.
- Reporting enhancements
- Provider updates
- Chargeback enhancements
- UX enhancements
CRI-O v1.9
CRI-O v1.9 is a lightweight, native Kubernetes container runtime interface that provides only the runtime capabilities needed by the kubelet. It's designed to be part of Kubernetes and evolve in lock-step with the platform.
CRI-O offers a minimal and secure architecture, excellent scale and performance, and the ability to run any Open Container Initiative (OCI) or docker image. This makes it a versatile and reliable choice for containerized applications.
To install and run CRI-O alongside docker, you need to enable it in the [OSEv3:vars] section of your Ansible inventory file during cluster installation (see the sketch below). This setting pulls the openshift3/cri-o system container image from the Red Hat Registry by default.
If you want to use an alternative CRI-O system container image from another registry, you can override the default using a separate inventory variable. This allows for flexibility and customization of your container runtime.
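A hedged sketch of those inventory settings follows; the variable names reflect the openshift-ansible installer of that era and should be verified against the 3.9 installation documentation before use:

```ini
[OSEv3:vars]
# Enable CRI-O alongside docker (assumption: variable name as used by openshift-ansible for 3.9)
openshift_use_crio=true

# Optional: pull the CRI-O system container image from a different registry (assumption: override variable name)
# openshift_crio_systemcontainer_image_override=registry.example.com/openshift3/cri-o:v3.9
```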
The atomic-openshift-node service must be RPM- or system container-based when using CRI-O; it cannot be docker container-based. This is a requirement for running CRI-O and ensures a secure and stable environment.
Here are the key benefits of using CRI-O:
- A minimal and secure architecture.
- Excellent scale and performance.
- The ability to run any Open Container Initiative (OCI) or docker image.
- Familiar operational tooling and commands.
StatefulSets, DaemonSets, and Deployments Supported
StatefulSets, daemonsets, and deployments are now stable and fully supported in OpenShift Container Platform, ending their technology preview phase.
The core workloads API, which includes these kinds, has been promoted to GA stability in the apps/v1 group version.
The apps/v1beta2 group version is deprecated, and all new code should use the kinds in the apps/v1 group version to stay compatible with the latest version of the platform.
With these workload controllers fully supported, managing complex containerized applications becomes more reliable and efficient.
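For reference, a minimal Deployment written against the GA API simply declares apps/v1; this generic example is illustrative and not taken from any particular product manifest:

```yaml
apiVersion: apps/v1        # GA group version; apps/v1beta2 is deprecated
kind: Deployment
metadata:
  name: hello-app          # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app       # selector is required in apps/v1 and must match the pod template labels
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello
          image: example.com/hello-app:latest   # placeholder image
```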
Federation
Federation allows you to connect multiple service meshes together, enabling communication between them. This feature is especially useful for large-scale applications that span multiple clusters.
With Service Mesh Federation, you can define a federation between two service meshes using a ServiceMeshPeer resource, which includes configuration for the gateway and root trust certificate.
You can also define which services are available for export or import using the ExportedServiceSet and ImportedServiceSet resources. These resources ensure that services are only imported or exported if they are made available by the peer mesh.
Here are the key resources involved in Service Mesh Federation:
- ServiceMeshPeer: defines a federation with a separate service mesh
- ExportedServiceSet: defines which services for a given ServiceMeshPeer are exported and made available to the peer mesh
- ImportedServiceSet: defines which services for a given ServiceMeshPeer are imported from the peer mesh
Keep in mind that Service Mesh Federation is not supported between clusters on ROSA, ARO, or OSD.
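To make the moving parts concrete, here is a rough sketch of a ServiceMeshPeer resource in the style of the federation documentation. All names and addresses are illustrative, and the exact field layout should be checked against the ServiceMeshPeer reference for your release:

```yaml
apiVersion: federation.maistra.io/v1
kind: ServiceMeshPeer
metadata:
  name: green-mesh                          # hypothetical name of the peer mesh
  namespace: red-mesh-system                # control plane namespace of the local mesh
spec:
  remote:
    addresses:
      - ingress.green-mesh.example.com      # assumption: address of the peer mesh's federation ingress
  gateways:
    ingress:
      name: ingress-green-mesh              # assumption: local gateway handling traffic from the peer
    egress:
      name: egress-green-mesh               # assumption: local gateway handling traffic to the peer
  security:
    trustDomain: green-mesh.local           # assumption: trust domain of the peer mesh
    clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account
    certificateChain:
      kind: ConfigMap
      name: green-mesh-ca-root-cert         # root trust certificate shared by the peer
```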
Improved Operator Performance
Improved Operator Performance is a key area where EDB Postgres for Kubernetes has made significant strides. The stable channel has been removed, and users are advised to move to a stable-vX.Y channel or the fast channel.
If you're currently using the stable channel, you have two options: move to a stable-vX.Y channel to remain in a minor release, or switch to the fast channel, which is the equivalent of stable before EDB introduced support for multiple minor releases.
Moving to the fast channel is recommended, as it provides the latest available patch release in the latest available minor release of EDB Postgres for Kubernetes. This means you'll always have access to the latest security patches and bug fixes.
In addition to the channel options, the Maistra Service Mesh operator has also seen improvements. Pruning old resources at the end of every ServiceMeshControlPlane reconciliation has been optimized, resulting in faster ServiceMeshControlPlane deployments and quicker application of changes to existing SMCPs.
Here's a summary of the channel options:
- Fast: The head version is always the latest available patch release in the latest available minor release.
- Stable-vX.Y: The head version is always the latest available patch release in the X.Y minor release.
Improved Playbook Performance
OpenShift Container Platform 3.9 has made significant changes to improve playbook performance.
One of the key improvements is the restructuring of playbooks to reduce unnecessary computations. This is achieved by pushing fact-gathering and common dependencies up into initialization plays, so they're only called once rather than each time a role needs access to their values.
This refactoring has resulted in playbooks that only touch the hosts that are truly relevant to the playbook. This means less overhead and faster execution times.
Here are the specific improvements made to playbooks:
- Restructured playbooks to push all fact-gathering and common dependencies up into the initialization plays.
- Refactored playbooks to limit the hosts they touch to only those that are truly relevant to the playbook.
Known Issues and Limitations
Cluster Limits for OpenShift Container Platform 3.9 have been updated.
There is a known issue in the initial GA release of OpenShift Container Platform 3.9 that causes the installation and upgrade playbooks to consume more memory on the control host than in previous releases.
The issue is caused by the use of include_tasks in several places; it has been addressed with the release of RHBA-2018:0600, which converts those calls to import_tasks, which consume less memory.
After that change, memory consumption on the control host should be below 100MiB per host.
For large environments with 100+ hosts, a control host with at least 16GiB of memory is recommended.
Known Issues
In the initial GA release of OpenShift Container Platform 3.9, there's a known issue that causes the installation and upgrade playbooks to consume more memory than previous releases.
This was caused by the use of include_tasks in several places, which increased memory consumption on the control host; the fix is to use import_tasks calls instead, which consume less memory.
The issue has been addressed with the release of RHBA-2018:0600, which converted the majority of these instances to import_tasks calls. With that update applied, memory consumption on the control host should be below 100MiB per host, and a control host with at least 16GiB of memory is still recommended for large environments.
If you're experiencing memory issues with your own OpenShift Container Platform 3.9 installation or upgrade playbooks, check whether they use include_tasks and consider switching those calls to import_tasks.
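As a small illustration of the difference (the task file name is hypothetical), a dynamic include versus a static import in an Ansible play looks like this:

```yaml
# Dynamic include: evaluated at runtime for every host, with higher memory overhead on the control host
- include_tasks: configure_node.yml

# Static import: resolved once when the playbook is parsed, which is what RHBA-2018:0600 switched to
- import_tasks: configure_node.yml
```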
Distributed Tracing Issues
Distributed tracing issues can be frustrating, especially when they're hard to diagnose. TRACING-2009 is a notable fix that updates the Jaeger Operator to support the Strimzi Kafka Operator 0.23.0.
One issue that's been fixed is the Jaeger agent sidecar injection failing due to missing config maps in the application namespace. This was caused by incorrect OwnerReference field settings, which have now been removed.
Another issue that's been addressed is the problem of multiple Jaeger production instances using the same name but within different namespaces causing Elasticsearch certificate issues. This was fixed by ensuring individual secrets for each Jaeger Elasticsearch instance.
A failed connection between the Agent and Collector when using Istio sidecar has also been resolved. This was done by enabling TLS communication by default between the Jaeger sidecar agent and the Jaeger Collector.
Some authentication issues have also been fixed, including a 500 Internal Error when accessing Jaeger UI using OAuth.
Here are some specific fixes:
- TRACING-2009: Jaeger Operator updated to include support for Strimzi Kafka Operator 0.23.0
- TRACING-1907: Jaeger agent sidecar injection issue fixed
- TRACING-1725: Elasticsearch certificates properly reconciled for multiple Jaeger production instances
- TRACING-1631: Elasticsearch certificate issue fixed for multiple Jaeger production instances
- TRACING-1300: Failed connection between Agent and Collector fixed
- TRACING-1208: Authentication issue fixed for Jaeger UI
- TRACING-1166: Jaeger streaming strategy issue fixed for disconnected environments
Technical Details
OpenShift 4 is the current major release series of the platform, offering improved scalability and security features.
OpenShift Container Platform 3.9 requires Docker 1.13 from the RHEL Extras channel, while the 4.x series uses the CRI-O container runtime rather than Docker.
Master hosts require at least 16 GB of RAM, which is a relatively modest requirement considering the platform's capabilities.
There is no single built-in default storage class; the default depends on the storage provisioner configured for the cluster, and node-local storage can be exposed for persistent volumes where needed.
Support and Maintenance
Red Hat provides support for OpenShift through various channels, including online documentation and community forums.
You can also purchase a Red Hat OpenShift subscription to get priority access to support engineers.
Their support team is available 24/7 to assist with any issues you may encounter.
Red Hat OpenShift is designed to be highly available and scalable, reducing the need for maintenance and minimizing downtime.
Regular updates and patches are released to ensure the platform stays secure and up-to-date.
You can also use the OpenShift CLI to manage and monitor your clusters, making it easier to perform maintenance tasks.
Red Hat provides tools like Red Hat Insights to help identify and fix issues before they cause problems.
Their support team can also help you plan and implement maintenance windows to minimize disruption to your applications.
Release History
Red Hat OpenShift Container Platform version 3.9 is now available, based on OpenShift Origin 3.9.
Red Hat skipped version 3.8 and released 3.9 directly after 3.7, which impacts installation and upgrade processes.
OpenShift Container Platform 3.9 is supported on RHEL 7.3 and 7.4 with the latest packages from Extras, including Docker 1.13.
2.0.7.1
The 2.0.7.1 release of Maistra Service Mesh addressed serious security issues, fixing several Common Vulnerabilities and Exposures (CVEs), always a major concern for any service mesh.
2.0
Maistra Service Mesh 2.0 was a major release that brought significant improvements to the control plane. This release added support for Istio 1.6.5, Jaeger 1.20.0, Kiali 1.24.2, and the 3scale Istio Adapter 2.0, and it runs on OpenShift Container Platform 4.6.
The installation, upgrades, and management of the control plane were greatly simplified. This was a game-changer for users who struggled with the previous convoluted process.
Resource usage and startup time were significantly reduced, making it easier to get up and running. This is especially important for users with limited resources.
Inter-control plane communication over the network was also reduced, which improved performance and can make a noticeable difference in real-world deployments.
The need to use Kubernetes Secrets, which posed a security risk, was removed, a welcome change for users who were concerned about security.
Certificate rotation was also improved, as proxies no longer require a restart to recognize new certificates. This is a small but significant improvement that can save time and hassle.
Removed 2.1
In Release 2.1, the Mixer component is no longer available.
Mixer plugins must be ported to WebAssembly Extensions before upgrading to Service Mesh 2.1.
Upgrading from a Service Mesh 2.0.x release to 2.1 will not proceed if Mixer plugins are enabled.
Custom metrics for telemetry must now be obtained using Envoy filter.
Bug fixes and support for Mixer are provided through the end of the Service Mesh 2.0 life cycle.
Bug Fixes and Improvements
In OpenShift version 4.10, a bug was fixed that caused the cluster to become unresponsive due to a memory leak in the Kubernetes API server.
The memory leak was caused by a misbehaving plugin that was not properly cleaned up.
The fix for this bug involved updating the Kubernetes API server to version 1.21.0, which included a patch that addressed the memory leak issue.
This update also brought several other improvements to the Kubernetes API server, including better handling of concurrent requests.
One notable improvement is the addition of a new feature that allows for more efficient handling of large numbers of pods.
This feature is particularly useful for clusters that have a large number of deployed applications, as it can help to reduce latency and improve overall cluster performance.
With the update to Kubernetes API server version 1.21.0, users can expect to see a significant improvement in cluster responsiveness and overall stability.
Frequently Asked Questions
What is OpenShift's latest version?
OpenShift's latest supported versions are 4.9 to 4.13, with version 4.9 being deprecated and set to reach end of support on August 30, 2023.
When was OpenShift 3.11 released?
OpenShift Container Platform 3.11 was released on October 10, 2018. This marked the general availability of new capabilities for managing cloud-native Kubernetes deployments.
Sources
- https://endoflife.date/red-hat-openshift
- https://www.enterprisedb.com/docs/postgres_for_kubernetes/latest/openshift/
- https://miminar.fedorapeople.org/_preview/openshift-enterprise/registry-redeploy/release_notes/ocp_3_9_release_notes.html
- https://maistra.io/docs/servicemesh-release-notes
- https://www.intelligentciso.com/2017/04/26/red-hat-announces-latest-version-of-red-hat-openshift-container-platform/