OpenShift Deployment Best Practices and Configurations

To ensure a smooth and efficient OpenShift deployment, it's essential to follow best practices and configure your environment correctly.

Start by defining a clear deployment strategy, taking into account the number of replicas, scaling requirements, and resource allocation. This will help you avoid common pitfalls and ensure your application is always available.

Use a consistent naming convention for your resources, including projects, namespaces, and services. This makes it easier to manage and troubleshoot your deployment.

Implement a robust monitoring and logging system to track performance, errors, and security issues. This will help you identify and address problems promptly.

Prerequisites

Before we dive into the world of OpenShift deployment, let's make sure we have the necessary prerequisites in place.

You'll need to allocate roughly 15 minutes to get everything set up.

To begin, you'll need an Integrated Development Environment (IDE).

JDK 17+ needs to be installed, along with JAVA_HOME configured appropriately.

Apache Maven 3.9.9 is also required.

If you want to use the Quarkus CLI, you'll need to have it installed as well.

Access to an OpenShift cluster is necessary, and Minishift is a viable option if you don't have one already set up.

Finally, the OpenShift CLI (oc) is optional overall, but required if you plan to deploy manually.

Project Setup

To set up your project for OpenShift deployment, start by creating a new project that contains the OpenShift extension. This can be done using the following command.
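As a sketch, using the Quarkus Maven plugin (the group id and extension list are illustrative; the artifact id matches the one used later in this guide):

```bash
# create a new Quarkus project with the openshift extension
# (pin a specific plugin version instead of the latest if you prefer)
mvn io.quarkus.platform:quarkus-maven-plugin:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=openshift-quickstart \
    -Dextensions='rest,openshift'
cd openshift-quickstart
```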

You can create a Gradle project by adding the --gradle or --gradle-kotlin-dsl option.

If you're using cmd, make sure to put everything on a single line without the backslash (\) line continuations. If you're using PowerShell, wrap -D parameters in double quotes, like this: "-DprojectArtifactId=openshift-quickstart".

The OpenShift extension is a wrapper extension that brings sensible defaults to the Kubernetes extension, making it easier to get started with Quarkus on OpenShift.

Here are some key options to keep in mind:

  • For cmd, put the command on one line without backslash line continuations.
  • For PowerShell, wrap -D parameters in double quotes.

By adding the OpenShift extension to your command, you'll also add the following dependency to your pom.xml.
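For reference, that dependency is the quarkus-openshift artifact, with its version managed by the Quarkus BOM:

```xml
<!-- added by the openshift extension; the version is inherited from the Quarkus BOM -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-openshift</artifactId>
</dependency>
```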

Bootstrapping the Project

To start a new project for Quarkus on OpenShift, you'll need to create a project that contains the OpenShift extension. This can be done using the following command.

If you're using cmd, you'll want to put everything on the same line without backslashes. On the other hand, if you're using PowerShell, you'll need to wrap the -D parameters in double quotes. For example, "-DprojectArtifactId=openshift-quickstart".

The command for creating a Gradle project is a bit more specific. You can add the --gradle or --gradle-kotlin-dsl option to create a Gradle project.

Here are the specific options you can use:

  • --gradle: Create a Gradle project
  • --gradle-kotlin-dsl: Create a Gradle Kotlin DSL project
  • -DbuildTool=gradle: Create a Gradle project
  • -DbuildTool=gradle-kotlin-dsl: Create a Gradle Kotlin DSL project
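As a rough illustration of both styles (project coordinates and extension list are illustrative):

```bash
# Quarkus CLI: generate a Gradle build instead of Maven
quarkus create app org.acme:openshift-quickstart --extension='openshift' --gradle

# Maven plugin: select the build tool with a -D property
mvn io.quarkus.platform:quarkus-maven-plugin:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=openshift-quickstart \
    -Dextensions='openshift' \
    -DbuildTool=gradle
```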

By adding the OpenShift extension to your command line invocation, you'll get a dependency added to your pom.xml file. This dependency is actually a wrapper extension that configures the Kubernetes extension with sensible defaults.

Build

To build and deploy your application in a single step, set the deploy property when packaging, for example: `./mvnw clean package -Dquarkus.openshift.deploy=true` (add `-Dquarkus.openshift.route.expose=true` if you also want the Route exposed). This builds your application locally, then triggers a container image build and applies the generated OpenShift resources automatically.

The generated resources use a Kubernetes Deployment but still make use of OpenShift-specific resources such as Route and BuildConfig. As of OpenShift 4.14, the DeploymentConfig object has been deprecated, so the standard Kubernetes Deployment resource is used instead.

You can also build your application locally and then configure the OpenShift application manually if you need more control over the deployment configuration.

Credit: youtube.com, Setup Tips for Your Next Programming Project

Here are the steps to manually configure the OpenShift application:

1. Build the container image by setting the `quarkus.container-image.build=true` property; the `quarkus.container-image.group` property controls the image group that is used (see the properties sketch after this list).

2. If a binding exists from a previous deployment, delete it so it can be created again after the new deployment.

3. Manually create the OpenShift resources from the resulting image.
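A minimal application.properties sketch for the image build in step 1 (the group value is hypothetical):

```properties
# build the container image as part of the Maven package phase
quarkus.container-image.build=true
# image group/namespace used when tagging the image (hypothetical value)
quarkus.container-image.group=testproject
```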

Note that the service is not exposed to the outside world by default, so you'll need to expose it manually using the `oc expose` command.

Here are the steps to expose the service manually:

1. Expose the service using the `oc expose` command.

2. Get the list of exposed routes using the `oc get routes` command.

3. Access your application using the exposed route.

Once the build is done, you can create a new application from the relevant ImageStream.
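A sketch of that flow, covering the expose steps above as well (application and ImageStream names are hypothetical):

```bash
# create an application from the ImageStream produced by the build
oc new-app --name=openshift-quickstart --image-stream=openshift-quickstart

# expose the service outside the cluster and look up the generated route
oc expose service/openshift-quickstart
oc get routes
```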

Building Blocks

Deployments and deployment configs are built on native Kubernetes API objects, ReplicaSets and ReplicationControllers respectively, which serve as their building blocks. This means you don't have to manipulate the replication controllers, replica sets, or pods owned by Deployment or DeploymentConfig objects yourself.

If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a custom deployment strategy.

A Deployment object serves as a descendant of the OpenShift Container Platform-specific DeploymentConfig object. This object describes the desired state of a particular component of an application as a pod template.

Deployments create replica sets, which orchestrate pod lifecycles. For example, a deployment definition can create a replica set to bring up one hello-openshift pod.
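A minimal sketch of such a Deployment definition (image, labels, and port are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1                      # keep one pod running
  selector:
    matchLabels:
      app: hello-openshift
  template:                        # pod template used by the generated ReplicaSet
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
        - name: hello-openshift
          image: openshift/hello-openshift
          ports:
            - containerPort: 8080
```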

Here's a summary of the building blocks of a deployment:

  • Deployment (or DeploymentConfig): describes the desired state of a component as a pod template.
  • ReplicaSet (or ReplicationController): created by the deployment to keep the desired number of pods running.
  • Pods: the running instances of your application, owned by the replica set.

By leveraging these building blocks, you can create a robust and scalable deployment strategy that meets your application's needs.

Octopus Deploy Connection

To connect Octopus Deploy to your project, you'll first need to create an account. This will give you access to the necessary tools and features.

Creating an account is a straightforward process that sets the stage for a successful project setup.

Next, you'll need to add a Kubernetes cluster target, which is achieved by following the same procedure as for any other K8s cluster.

This involves linking your cluster to Octopus Deploy, allowing you to manage and deploy your applications with ease.

By following these simple steps, you'll be well on your way to setting up a robust and efficient project.

Deployment Process

Kubernetes provides a first-class, native API object type in OpenShift Container Platform called Deployment, which serves as a descendant of the OpenShift Container Platform-specific DeploymentConfig object.

To deploy an application, you'll need to create a Deployment object that describes the desired state of a particular component of your application as a pod template. This object will create a replica set, which orchestrates pod lifecycles.

You'll also need to specify the namespace you're deploying to, which is the K8s namespace that will be used to deploy your application. For example, if your project name is testproject, you'll need to add a value of testproject to the Project.Kubernetes.Namespace variable.

The Deployment object will create a replica set to bring up the desired number of pods, such as one hello-openshift pod in the example provided. You may need to make minor changes to your YAML file, like changing the Load Balancer resource, to ensure it's compatible with OpenShift.

Route Configuration

Route Configuration is a crucial step in deploying your application on OpenShift. You can expose your Route for the Quarkus application by passing the `quarkus.openshift.route.expose` property as a command line argument.

To do this, use the following command: `./mvnw clean package -Dquarkus.openshift.route.expose=true`. This will expose your Route without needing to add the property to your `application.properties` file.

To secure incoming connections, you can use TLS termination. This is done by configuring the `quarkus.openshift.route.tls` properties.
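A sketch of the corresponding application.properties entries, assuming the route TLS properties of recent Quarkus releases:

```properties
# expose a Route for the application
quarkus.openshift.route.expose=true
# terminate TLS at the router edge (assumed property name; other termination
# modes such as passthrough or reencrypt are configured the same way)
quarkus.openshift.route.tls.termination=edge
```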

Environment Variables from Fields

You can use the value from another field to add a new environment variable by specifying the path of the field to be used as a source.

This feature allows you to create dynamic environment variables based on the values of other fields in your resources.

To use this feature, you need to specify the path of the field to be used as a source, as follows:
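A hypothetical sketch, assuming the extension's `env.fields` mapping (the variable name and field path are illustrative):

```properties
# expose the pod's namespace to the application as an environment variable,
# using a downward-API field path as the source
quarkus.openshift.env.fields.POD_NAMESPACE=metadata.namespace
```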

Here's an example of how to extract a value identified by the keyName field from the my-secret Secret into a foo environment variable:
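A sketch of the configuration, assuming the extension's env mapping properties:

```properties
# map the keyName entry of the my-secret Secret to the foo environment variable
quarkus.openshift.env.mapping.foo.from-secret=my-secret
quarkus.openshift.env.mapping.foo.with-key=keyName
```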

This would generate the following in the env section of your container:
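Roughly, and assuming the extension normalizes the variable name to upper case:

```yaml
env:
  - name: FOO
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: keyName
```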

The following extracts a value identified by the keyName field from the my-config-map ConfigMap into a foo environment variable:
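A sketch, assuming the ConfigMap variant mirrors the Secret property names above (check the extension's configuration reference for the exact keys):

```properties
# map the keyName entry of the my-config-map ConfigMap to the foo environment variable
quarkus.openshift.env.mapping.foo.from-configmap=my-config-map
quarkus.openshift.env.mapping.foo.with-key=keyName
```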


Resource Management

Resource Management is a crucial aspect of OpenShift deployment. It involves managing the resources allocated to applications, ensuring they run efficiently and effectively.

To effectively manage resources, you can use the OpenShift Resource Quota feature, which allows you to set limits on the amount of resources an application can consume. This helps prevent resource wastage and ensures that applications have the necessary resources to function properly.

By setting resource quotas, you can also monitor and control the usage of resources by individual applications, making it easier to identify and address any resource-related issues. This helps maintain a healthy and balanced environment within the OpenShift cluster.
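As a sketch, a quota that caps pods, CPU, and memory for a project might look like this (all names and values are illustrative), applied with `oc apply -f quota.yaml`:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: testproject        # hypothetical project name
spec:
  hard:
    pods: "20"                  # at most 20 pods in the project
    requests.cpu: "4"           # total CPU requested across all pods
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```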

Modifying Generated Resources

You can change the type of deployment resource generated by Quarkus, such as choosing a DeploymentConfig, StatefulSet, Job, or CronJob resource.

To generate a Job resource, set the deployment kind to Job in your application.properties and, if needed, provide the container arguments with the quarkus.openshift.arguments property, like quarkus.openshift.arguments=A,B.

You can also configure the rest of the Kubernetes Job configuration using properties under quarkus.openshift.job.xxx.

If you want to generate a CronJob resource, you need to specify a Cron expression using the property quarkus.openshift.cron-job.schedule, otherwise the build will fail.

You can configure the rest of the Kubernetes CronJob configuration using properties under quarkus.openshift.cron-job.xxx.
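A sketch of the relevant application.properties entries, assuming the extension's deployment-kind property selects the resource type (the schedule value is illustrative):

```properties
# generate a Job instead of a Deployment, passing arguments to the container
quarkus.openshift.deployment-kind=Job
quarkus.openshift.arguments=A,B

# or generate a CronJob instead; a schedule is mandatory or the build fails
#quarkus.openshift.deployment-kind=CronJob
#quarkus.openshift.cron-job.schedule=0 * * * *
```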

Mounting Volumes

Mounting volumes is a crucial aspect of resource management, allowing you to configure both volumes and mounts for your application.

You can add a mount to your pod for a volume with a simple configuration. This will map the volume to a specific path, such as /where/to/mount.
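A minimal sketch, assuming the extension's volume and mount properties (volume and ConfigMap names are hypothetical):

```properties
# declare a volume backed by a ConfigMap
quarkus.openshift.config-map-volumes.my-volume.config-map-name=my-config-map
# mount that volume into the container at the given path
quarkus.openshift.mounts.my-volume.path=/where/to/mount
```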

CockroachDB deployments often use persistent volumes to store data, which are typically file-system mounts mapped to disks or SSDs.

To manage persistent volumes effectively, most CockroachDB clusters implement a single PVC that's assigned to each node in a stateful set.

Replication Controllers

Replication controllers are a fundamental concept in resource management, ensuring that a specified number of replicas of a pod are running at all times.

A replication controller configuration consists of three main components: the number of replicas desired, a pod definition to use when creating a replicated pod, and a selector for identifying managed pods.

The number of replicas desired can be adjusted at runtime, allowing for dynamic scaling of resources.

A replication controller uses a selector to determine how many instances of the pod are already running in order to adjust as needed.

The replication controller does not perform auto-scaling based on load or traffic, but rather requires its replica count to be adjusted by an external auto-scaler.

Here's a breakdown of the key elements of a replication controller configuration:

  • replicas: the number of pod copies that should be running at all times.
  • selector: the label query used to identify the pods the controller manages.
  • template: the pod definition used when creating new replicas.
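For reference, a minimal ReplicationController manifest ties these elements together (image and labels are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1              # desired number of pod copies
  selector:                # identifies the pods this controller manages
    name: frontend
  template:                # pod definition used to create new replicas
    metadata:
      labels:
        name: frontend     # labels must match the selector
    spec:
      containers:
        - name: helloworld
          image: openshift/hello-openshift
          ports:
            - containerPort: 8080
```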

By understanding how replication controllers work, you can effectively manage your resources and ensure that your applications are always running as needed.

Comparing Objects

In OpenShift Container Platform, you have the option to use either Deployment or DeploymentConfig objects for resource management.

Deployment objects are the recommended choice unless you specifically need a feature or behavior provided by DeploymentConfig objects.

Both Deployment and DeploymentConfig objects are supported in OpenShift Container Platform, giving you flexibility in your resource management approach.

If you do choose to use DeploymentConfig objects, be aware that they are provided by OpenShift Container Platform specifically, and not a standard Kubernetes feature.

The differences between Deployment and DeploymentConfig objects are worth considering to ensure you're using the right tool for the job.

Create Service Account

To create a service account, you'll need to navigate to the User Management section of your OpenShift project. Expand User Management to access the necessary options.

Clicking on Service Accounts will lead you to the page where you can create a new service account. From there, click on Create Service Account to begin the process.

Before creating the service account, ensure you're working in the correct project: check the project name in the console URL, or use the `oc project` command in your terminal to switch to it.
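For example (project and service account names are hypothetical):

```bash
# switch to, or confirm, the project you're deploying into
oc project testproject

# the same service account can also be created from the CLI
oc create serviceaccount octopus-deploy-sa
```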

Frequently Asked Questions

What are OpenShift deployments?

OpenShift deployments define the desired state of a component, creating and managing ReplicaSets to deploy and manage containerized applications. They describe how to create or modify pods that hold a containerized application.

What is the difference between OpenShift deployment and Deploymentconfigs?

OpenShift Deployments prioritize availability over consistency, while DeploymentConfigs prioritize consistency. This fundamental difference affects how each handles the rollout process.

How to deploy a service in OpenShift?

To deploy a service in OpenShift, use the oc new-app command to create a new application. This is the first step in the deployment process, which you can then monitor and test as outlined in subsequent steps.
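For instance, a minimal sketch using the sample repository from the OpenShift documentation (the application name is arbitrary):

```bash
# create an application directly from a Git repository and check its status
oc new-app https://github.com/sclorg/nodejs-ex --name=nodejs-sample
oc status
```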
