OpenShift Pipelines Tutorial: From Basics to Practice


In this OpenShift Pipelines tutorial, we start from the basics. We need to understand what OpenShift Pipelines is and its building blocks, which include stages such as Source, Build, Deploy, and Test.

OpenShift Pipelines is a CI/CD tool for automating application delivery. It helps us take source code from the development environment and deploy it automatically to production.

By using OpenShift Pipelines, we can automate our deployment and test workflows, which greatly reduces manual errors and wasted time.

Setup

To set up OpenShift Pipelines, you need to log in to your OpenShift environment. You can do that using the oc CLI by providing the URL of your OpenShift cluster together with a username and password, but it's more convenient to do it through the console.

You can log in to the console, click on your username, and choose Copy Login Command. This provides you with the credentials needed to access your OpenShift environment.

To create a project, you can use the command `oc new-project fuse-pipelines`.
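
Putting these steps together, a typical session might look like this; the token and server URL below are placeholders for the values the console's Copy Login Command gives you:

```bash
# Log in with the token copied from the console (placeholder values)
oc login --token=sha256~<your-token> --server=https://api.<cluster-domain>:6443

# Create the project that will hold the pipeline artifacts
oc new-project fuse-pipelines
```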

OC Environment

To set up your OC environment, you'll need to create a script that maintains your OpenShift-Pipelines artifacts. I created a script in a GitHub project called openshift-pipelines that you can fork and modify to suit your needs.

Several commands and YAML files refer to identifiers from other steps, so you'll need a way to share them between commands and YAML files. A separate script called bin/oc_env.sh.tpl is created to do this.


Make a copy of this script to bin/oc_env.sh and modify the settings to suit your situation. I added oc_env.sh to my .gitignore file to prevent committing credentials.

Several artifacts are defined through YAML files, which contain references to values in the bin/oc_env.sh script. To replace these references, you can use a handy Linux tool called envsubst, which is installed by default in Red Hat-based Linux distributions.

You'll need to copy the YAML file to a .tpl file, replace the hard-coded values with environment variable references, and then feed the template file to envsubst, redirecting the output to another file. Make sure that the variables named in the template file are exported, so they're known in the session that executes envsubst.
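
As a minimal sketch of this pattern, assuming bin/oc_env.sh exports its variables and using the workspaces/sources-pvc.yaml.tpl template (described later) as an example input:

```bash
# Make the settings available (oc_env.sh is assumed to export its variables)
source bin/oc_env.sh

# Render the template: envsubst replaces ${VAR} references with exported values
envsubst < workspaces/sources-pvc.yaml.tpl > workspaces/sources-pvc.yaml
```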

With these preliminary setup steps, you're ready to take the first steps for setting up the OpenShift Pipeline.

Workspace/Persistent Volume Claim

In OpenShift Pipelines, Workspaces help tasks share data, and allow you to specify one or more volumes that each task in the pipeline requires during execution.


You can create a persistent volume claim or provide a volume claim template that creates a persistent volume claim for you. This is useful for tasks that work on a shared set of files, such as checking out code, building sources, and building container images.

To create a persistent volume claim, you can use the `workspaces/create-pvc.sh` script, which uses the `workspaces/sources-pvc.yaml.tpl` YAML template.
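
For illustration, the template could look roughly like this; the ${PVC_NAME} and ${PVC_SIZE} variables are assumed placeholders for values set in bin/oc_env.sh:

```yaml
# Illustrative sketch of workspaces/sources-pvc.yaml.tpl; variable names are assumptions
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${PVC_NAME}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: ${PVC_SIZE}
```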

You can specify Workspaces in the TaskRun or PipelineRun using a read-only ConfigMap or Secret, an existing PersistentVolumeClaim shared with other Tasks, a PersistentVolumeClaim created from a provided VolumeClaimTemplate, or an emptyDir that is discarded when the TaskRun completes.

A Pipeline can define as many Workspaces as required, and a Task definition can include as many Workspaces as it requires. However, it is recommended that a Task uses at most one writable Workspace.

In the build-deploy-api-pipelinerun PipelineRun, for example, a volume claim template is used to create a persistent volume claim for the shared-workspace Workspace. This is done by specifying the name of the Workspace along with a volume claim template from which the persistent volume claim is created.
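
A hedged sketch of that binding, assuming a pipeline named build-deploy-api and a modest storage request:

```yaml
# Illustrative workspace binding in the build-deploy-api-pipelinerun PipelineRun
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-deploy-api-pipelinerun
spec:
  pipelineRef:
    name: build-deploy-api   # assumed pipeline name
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 500Mi
```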

Trigger


Trigger is a crucial component in OpenShift pipelines, allowing you to automate tasks and workflows. It's composed of TriggerTemplate, TriggerBindings, and interceptors.

To define a Trigger, you'll need to create a TriggerTemplate, which is a resource with parameters that can be substituted anywhere within the resources of a template. This is done by applying the TriggerTemplate YAML file, as seen in Example 3, "Trigger Template".

TriggerBindings are used to capture fields from an event and store them as parameters, which can then be replaced in the TriggerTemplate. The definition of our TriggerBinding is given in Example 4, "Trigger Binding".
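
As a sketch of what such a TriggerBinding can look like for a GitHub push event (the binding and parameter names here are assumptions, not the tutorial's exact Example 4):

```yaml
# Illustrative TriggerBinding extracting values from a GitHub push payload
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: vote-api-binding
spec:
  params:
    - name: git-repo-url
      value: $(body.repository.url)
    - name: git-revision
      value: $(body.head_commit.id)
```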

You'll also need to define a Trigger, which combines TriggerTemplate, TriggerBindings, and interceptors. The definition of our Trigger is given in Example 2, "Trigger".

To trigger a pipeline run, you'll need to start a pipeline and tie it to the persistentVolumeClaim and params that should be used for this specific invocation. This is done using the `tkn` command, as seen in Example 1, "Trigger Pipeline".
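
A minimal sketch of such an invocation, assuming a pipeline named build-deploy-vote-api, a workspace named shared-workspace, and a GIT_REPO param (all names besides the tkn flags are assumptions):

```bash
# Start the pipeline, binding the workspace to an existing PVC and passing params
tkn pipeline start build-deploy-vote-api \
  --workspace name=shared-workspace,claimName=tekton-tutorial-sources \
  --param GIT_REPO=https://github.com/<your-user>/vote-api.git \
  --showlog
```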


Here's a step-by-step overview of the trigger pipeline run process:

  1. The configured webhook in the vote-api GitHub repository pushes the event payload to our route (exposed EventListener Service).
  2. The Event-Listener passes the event to the TriggerBinding and TriggerTemplate pair.
  3. TriggerBinding extracts parameters needed for rendering the TriggerTemplate.
  4. Successful rendering of TriggerTemplate creates 2 PipelineResources (source-repo-vote-api and image-source-vote-api) and a PipelineRun (build-deploy-vote-api).

Note that you'll need to expose the EventListener service as an OpenShift Route to make it publicly accessible, as seen in Example 6, "Event Listener".
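
Assuming the EventListener is named vote-api (Tekton prefixes the generated service with el-), exposing it could look like this:

```bash
# Expose the generated EventListener service and read back the public URL
oc expose service el-vote-api
oc get route el-vote-api -o jsonpath='{.spec.host}'
```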

Pipeline Configuration

Pipeline Configuration is a crucial step in setting up OpenShift Pipelines. To configure webhooks, you need to fork the backend and frontend source code repositories to get sufficient privileges to configure GitHub webhooks.

To verify the webhooks, go to your GitHub repo, navigate to Settings > Webhooks, and you should see a webhook configured on your forked source code repositories. You can test it by pushing a commit to the vote-api repository from the GitHub web UI or the terminal.

To define and create pipeline tasks, you need to create a Maven Task with a single step to build a Maven-based application, and then add two reusable tasks from the catalog repository. To do this, you can use the commands `oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/pipeline/update_deployment_task.yaml` and `oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/pipeline/apply_manifest_task.yaml`.


Here's a list of tasks that can be created:

  • apply-manifests
  • update-deployment
  • buildah
  • s2i-python-3
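
After creating them, a quick sanity check is to list the Tasks in the current namespace:

```bash
# List the Tasks available in the current namespace
tkn task ls
```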

To define and create pipeline resources, you need to create a resources.yaml file with the pipeline resources that contain the specifics of the Git repository and the image registry to be used in the pipeline during execution. Then, you can create the pipeline resources using the command `oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/pipeline/resources.yaml`.

Create Secret from Personal Access Token

To create a secret from a Personal Access Token, you can follow these steps. Use a separate system account rather than your private account, to avoid issues if you ever need to change your password.

First, navigate to your User Settings in your GitLab or GitHub account and go to the Access Tokens area. You can then click on the Generate Token button to create a new Personal Access Token.

Select the scopes you want the token to access, such as repo:status, public_repo, and notifications. Take a note of the token, preferably in a tool like KeePass or LastPass, as you won't be able to see it again.


Fill in your GitHub username and the token (in the password field) under the stringData node in your clone of the serviceaccount/github-secret.yml.tpl file. You can also use the oc create secret subcommand as an alternative to the YAML approach.

The serviceaccount/create-github-secret.sh script annotates the Secret with tekton.dev/git-0=https://github.com, which is needed to enable Tekton to use it with github.com.
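
Putting the two pieces together, the template could look roughly like this; the secret name and variable names are assumptions, while the annotation comes straight from the script:

```yaml
# Illustrative sketch of serviceaccount/github-secret.yml.tpl; names are assumptions
apiVersion: v1
kind: Secret
metadata:
  name: github-secret
  annotations:
    tekton.dev/git-0: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: ${GITHUB_USER}
  password: ${GITHUB_TOKEN}
```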

Create Service Account

To create a service account, use the serviceaccount/create-github-serviceaccount.sh script, which works similarly to the serviceaccount/create-github-secret.sh script.

This script uses the serviceaccount/github-service-account.yml.tpl file, which is a template file created by first creating a service account using the description at https://redhat-scholars.github.io/tekton-tutorial/tekton-tutorial/private_reg_repos.html.

The service account will refer to the secret created before and function as a credential in Kubernetes to execute pipelines.

In a later stage, we'll have to grant role-privileges to the service account.

The template YAML file was created by exporting the service account definition from the OpenShift console, and the secret is attached to the service account via a JSON snippet.
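
A minimal sketch of what the service account template could contain; the account name is an assumption, and the secret reference matches the secret created above:

```yaml
# Illustrative sketch of serviceaccount/github-service-account.yml.tpl
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline-sa   # assumed name
secrets:
  - name: github-secret
```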

List Directory Task


The List Directory Task is a useful utility task that allows you to check the working of different parts of the pipeline(s).

It's defined in the tasks/list-directory-task.yml file, which lists the directory contents of the workspace recursively through all sub-directories and also lists the README.md.
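
A hedged sketch of what the task definition could look like; the step image and exact script are assumptions based on the description:

```yaml
# Illustrative sketch of tasks/list-directory-task.yml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: list-directory
spec:
  workspaces:
    - name: sources
  steps:
    - name: list-dir
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/bin/sh
        # Recursively list the workspace contents and print the README
        ls -R $(workspaces.sources.path)
        cat $(workspaces.sources.path)/README.md
```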

You can create the task using the tasks/create-list-dir-task.sh script, which first deletes the task if it already exists and then recreates it using the YAML file.
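
The delete-then-recreate pattern the script follows could look like this (the task name is an assumption):

```bash
# Remove the task if it already exists, then recreate it from the YAML definition
oc delete task list-directory --ignore-not-found
oc create -f tasks/list-directory-task.yml
```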

This task is used in a first sample pipeline that checks out or clones a Git repository, making it a great starting point for exploring pipeline configurations.

Defining and Creating Resources

To define and create pipeline resources, you need to create a resources.yaml file that contains the specifics of the Git repository and the image registry to be used in the pipeline during execution.

You can create Pipeline Resources by running the command `oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/pipeline/resources.yaml`. This command creates the required resources for the pipeline to function properly.


Pipeline Resources are artifacts that are used as inputs to a Task and can be output by a Task. They are essential for the pipeline to work efficiently.

Here are the Pipeline Resources that you can create: a git resource pointing at the source repository (source-repo-vote-api) and an image resource pointing at the target image registry (image-source-vote-api).

Once they've been created, you can use the command `tkn resource ls` to list the available resources and their details.
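
For reference, a sketch of what those two PipelineResources look like; the URLs are placeholders you'd replace with your fork and registry:

```yaml
# Illustrative git and image PipelineResources (names taken from the trigger section)
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: source-repo-vote-api
spec:
  type: git
  params:
    - name: url
      value: https://github.com/<your-user>/vote-api.git
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: image-source-vote-api
spec:
  type: image
  params:
    - name: url
      value: image-registry.openshift-image-registry.svc:5000/<project>/vote-api
```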

Java and Knative Service Template

To create a Java and Knative Service Template, you'll first need to deploy it in an OpenShift namespace. This is necessary for the template to be visible in the ODC Add Pipeline option.

The template needs to be identified or tagged as being used with Java runtime applications. This is a crucial step, as it allows the template to be associated with the correct type of application.

The APP_NAME parameter is automatically set by OpenShift to the Name (Greeter Application) of the application created via ODC. This happens seamlessly behind the scenes.


To use the template with Knative service type applications, you'll need to identify or tag it as such. This ensures that the template is correctly linked to the right type of application.

Here are the key steps to create a Java and Knative Service Template:

  • Deploy the template in an OpenShift namespace so it becomes visible in the ODC Add Pipeline option.
  • Identify or tag the template as being used with Java runtime applications.
  • Identify or tag the template for use with Knative service type applications.
  • Let OpenShift set the APP_NAME parameter automatically to the name of the application created via ODC.

Project Preparation

To start working with OpenShift pipelines, you'll need to create a new project. This can be done by clicking the "Create" button after entering the project details, and then navigating to the newly created project.

The project is created in a namespace called "pipelines-demos", which is where all exercises in this chapter will take place. You can verify this by checking the namespace in the OpenShift console.

To execute certain commands, you'll need to use the `oc` command, so make sure you're in the pipelines-demos project by running `oc project pipelines-demos`.

Cluster Configuration

In OpenShift Pipelines, cluster tasks are a type of task that are global to the platform, meaning they're not tied to a specific namespace.

These tasks can be listed using the `tkn clustertask ls` command, which shows the preseeded cluster tasks that come with OpenShift Pipelines.

The git-clone ClusterTask is a useful one to start with, but for build-and-deploy pipelines, the buildah and/or maven ClusterTasks might be more interesting.
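
For example, you can list them and narrow the output to the tasks mentioned here:

```bash
# List the preseeded ClusterTasks that ship with OpenShift Pipelines
tkn clustertask ls

# Narrow the list to the ClusterTasks used in this tutorial
tkn clustertask ls | grep -E 'git-clone|maven|buildah'
```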

Required Cluster Tasks


The Cloud Native Application pipeline requires a specific set of Cluster Tasks to run smoothly. These tasks are essential for the pipeline's functionality.

The required Cluster Tasks include Git Clone, which is a preseeded ClusterTask that can be listed with the tkn clustertask ls command. This task is used for the first pipeline.

Maven and Buildah are also required Cluster Tasks for build-and-deploy pipelines. They can be listed with the tkn clustertask ls command, along with the other preseeded ClusterTasks.

OpenShift has all these tasks installed as ClusterTasks as part of the openshift-pipelines install. The OpenShift Client and kn Client are also required Cluster Tasks.

Here are the required Cluster Tasks for the Cloud Native Application pipeline:

  • Git Clone
  • Maven
  • Buildah
  • OpenShift Client
  • kn Client

Update Cluster Tasks

Updating your cluster tasks is a crucial step in ensuring everything runs smoothly. To get the latest versions of the kn, maven, and buildah ClusterTasks, run the corresponding update command.

If you're experiencing issues with outdated tasks, updating them will likely resolve the problem. You can then move forward with your cluster configuration.

Create PVC


Creating a PVC is an essential step in setting up your cluster configuration. We'll be using the PVC tekton-tutorial-sources for the exercises in this chapter and the next ones: it serves as the source workspace for our Tekton pipelines and will be referenced later in the cluster configuration.

To create it, follow the instructions from the Workspace/Persistent Volume Claim section earlier.
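
Assuming the workspaces/create-pvc.sh script from the Workspace section is configured to create this claim, the creation and a quick check could look like this:

```bash
# Create the claim and verify that it exists and is bound
workspaces/create-pvc.sh
oc get pvc tekton-tutorial-sources
```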

System Configuration

When setting up an OpenShift pipeline, it's essential to configure the environment correctly.

You can create a new pipeline by defining the pipeline's name and specifying the source repository URL and the branch you want to build.


In the OpenShift web console, navigate to the Pipelines page and click on the "Create Pipeline" button.

Make sure to select the correct Git repository and branch so that the pipeline builds the correct code, and give the pipeline a unique name that identifies it.

In the pipeline configuration, you can also specify the image stream tag and the build configuration.

The image stream tag is used to reference the Docker image that will be built, and the build configuration defines the build process.
