To configure a pipeline in Azure DevOps for multi-stage builds, you'll need to create a YAML file that defines the stages and jobs for your build process. This YAML file is the backbone of your pipeline configuration.
In Azure DevOps, you can create a new pipeline by clicking on the "New pipeline" button in the Pipelines section of your project. From there, you can choose to create a pipeline from scratch or use a template to get started.
The YAML file for a multi-stage pipeline will have multiple stages, each representing a different environment or deployment target. For example, you might have a "build" stage that compiles your code, a "test" stage that runs automated tests, and a "release" stage that deploys your application to a production environment.
Azure DevOps provides a range of predefined variables that you can use to customize your pipeline configuration. For example, the $(Build.SourceBranch) variable gives the branch that triggered the build, as a full ref such as refs/heads/main.
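As a minimal sketch, a step that echoes that variable might look like this (the step and its display name are illustrative, not part of the walkthrough):

```yaml
steps:
- script: echo "This run was triggered from $(Build.SourceBranch)"
  displayName: 'Show triggering branch'   # prints e.g. refs/heads/main
```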
Setting Up DevOps
Setting up Azure DevOps can be a bit overwhelming, but it doesn't have to be. A pipeline is composed of stages, jobs, and steps.
To create a pipeline, you'll need to understand the hierarchy, which is outlined in the Microsoft documentation for Azure Pipelines. The pipeline hierarchy is straightforward: a stage contains multiple jobs, and jobs contain multiple steps.
YAML is whitespace-sensitive, so keep an eye on the required indentation and dashes when creating a pipeline. Syntax-checking extensions for Visual Studio Code can help catch these errors, making the process much smoother.
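As a rough sketch of that hierarchy (the stage and job names here are placeholders), a minimal multi-stage file looks like this:

```yaml
stages:
- stage: Build                 # a stage groups related jobs
  jobs:
  - job: BuildJob              # a job runs on a single agent
    steps:
    - script: echo "compile the code here"
- stage: Test
  jobs:
  - job: TestJob
    steps:
    - script: echo "run automated tests here"
```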
Pipeline Triggers
Pipeline Triggers are a powerful feature in Azure DevOps that allow you to automate the execution of pipelines based on specific events. You can specify multiple trigger types for a pipeline, including Continuous Integration (CI) triggers and pipeline completion triggers.
To combine CI triggers and pipeline completion triggers, you'll need to make sure they don't start duplicate runs of the same pipeline. For example, if pipeline B has both a CI trigger and a pipeline completion trigger for the completion of pipeline A, a push to the repository starts a run of A and a run of B, and then a second run of B once A completes. To prevent the duplicate run, disable either the CI trigger or the pipeline completion trigger in pipeline B.
By understanding how pipeline triggers work, you can create efficient and automated workflows in Azure DevOps. This will save you time and effort in the long run, allowing you to focus on more important tasks.
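As a sketch, a pipeline completion trigger is declared as a pipeline resource in the downstream pipeline's YAML; the pipeline names below are placeholders:

```yaml
# In pipeline B's YAML: run B whenever pipeline A completes
resources:
  pipelines:
  - pipeline: upstream        # alias used within this pipeline
    source: pipeline-A        # name of the triggering pipeline in Azure DevOps
    trigger: true             # enable the pipeline completion trigger
```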
Resource Triggers
Resource Triggers are another form of pipeline trigger: they start a run in response to a change in a resource the pipeline declares, rather than a push to the pipeline's own repository.
In Azure Pipelines, a resource can be another pipeline, a repository, a container image, a package, or an incoming webhook, and a trigger can be configured to fire when that resource is updated.
For example, a pipeline can be triggered when a new container image is pushed to a registry or when an external system posts to a webhook, making it easy to automate tasks that depend on those events.
This is particularly useful when your workflow depends on external services or systems that publish changes on their own schedule.
Because the pipeline runs only when the declared resource actually changes, you avoid polling for updates or building against stale inputs.
By using Resource Triggers, you can create a more efficient and reliable workflow that's less prone to errors and failures.
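For externally produced events such as a file upload, the closest built-in mechanism is a webhook resource, where an external system posts to an incoming webhook service connection. A rough sketch, in which the webhook alias, connection name, and filter path are all hypothetical, might look like this:

```yaml
resources:
  webhooks:
  - webhook: NewFileUploaded            # hypothetical webhook alias
    connection: IncomingWebhookConn     # hypothetical incoming webhook service connection
    filters:
    - path: folderName                  # hypothetical JSON path in the webhook payload
      value: incoming-files
```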
Combining Trigger Types
Specifying both CI triggers and pipeline triggers in your pipeline can lead to multiple runs being started at the same time.
If you have multiple pipelines in the same repository, each with its own CI triggers, a new run of each pipeline will be started whenever a push is made that matches the filters of the CI trigger.
For example, if you make a push to the repository, a new run of pipeline A is started based on its CI trigger, and a new run of pipeline B is started based on its CI trigger.
If pipeline B has a pipeline completion trigger configured for the completion of pipeline A, another run of pipeline B will be started when pipeline A completes.
To prevent triggering two runs of pipeline B in this scenario, you must disable either its CI trigger or its pipeline completion trigger, as sketched below.
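For example, to make pipeline B run only when pipeline A completes, a sketch of B's YAML might disable its own CI trigger like this (the pipeline names are placeholders):

```yaml
# Pipeline B: no CI trigger, only runs when pipeline A finishes
trigger: none

resources:
  pipelines:
  - pipeline: upstream
    source: pipeline-A        # placeholder name of the triggering pipeline
    trigger: true
```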
Pipeline Configuration
To create a pipeline in Azure DevOps, you'll need to start by creating a file with a .yml extension, like "pipeline.yml". This file can be placed anywhere in your project, but it's often best to keep it at the top level.
The finished file will have a stage, a job, and six steps, but let's start with just the first step. You'll need to give the stage, job, and task internal names, which can't contain spaces and often aren't very descriptive; use displayName to add a friendlier name that will show up in Azure DevOps.
You'll also need to specify the pool and vmImage, which determine the virtual machine (agent) the build will run on. You can choose between a Microsoft-hosted agent and a private (self-hosted) agent. For now, let's go with a Microsoft-hosted agent, which has a free tier and offers a variety of images, such as the latest version of Windows (windows-latest).
Here's a summary of the choices for this first step and the agent it runs on (the sketch after this list puts them together):
- Microsoft-hosted agent: free tier, various images available
- Private (self-hosted) agent: requires setup and configuration
- VM image: windows-latest, the most recent Windows image
- .NET Core SDK (installed via the .NET Core installer task): the latest release of major version 3 (3.x)
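Putting those choices together, a sketch of the start of the file might look like this (the stage and job names are placeholders):

```yaml
stages:
- stage: Build
  displayName: 'Build the application'
  jobs:
  - job: BuildJob
    displayName: 'Build job'
    pool:
      vmImage: 'windows-latest'     # Microsoft-hosted Windows agent
    steps:
    - task: UseDotNet@2             # .NET Core SDK installer task
      displayName: 'Install .NET Core SDK'
      inputs:
        packageType: 'sdk'
        version: '3.x'              # latest release of major version 3
```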
What You'll Need
To get started with pipeline configuration, you'll need a few essential tools. An Azure subscription is free to sign up for. You'll also need an Azure DevOps account, which is also free to sign up for.
Azure Repos is a great option for your Git repository, and it connects directly to Azure Pipelines. For the Azure App Service plan, you can use the free tier, which gives you the two app services (staging and production) used in this walkthrough. Having an IDE such as Visual Studio Code will also be helpful, especially with a pipeline YAML syntax-highlighting extension installed.
Branch Filters
Branch filters allow you to specify which branches to include or exclude when configuring a trigger.
Branch filters in a pipeline completion trigger are matched against the branch the triggering (parent) pipeline ran for, so include every branch for which you want the downstream pipeline to run.
If your branch filters aren't working, try adding the refs/heads/ prefix; for example, use refs/heads/releases/old* instead of releases/old*.
Branch filters can be used to trigger a pipeline for specific branches, such as releases/*, or for a combination of branches, like releases/* and main.
If you want to exclude certain branches, you can specify them in the branch filters, for example, releases/old*.
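A sketch of branch filters inside a pipeline completion trigger, using the example branches above (the pipeline names are placeholders):

```yaml
resources:
  pipelines:
  - pipeline: upstream
    source: pipeline-A        # placeholder triggering pipeline
    trigger:
      branches:
        include:
        - releases/*
        - main
        exclude:
        - releases/old*
```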
Stage Filters
You'll want to use the stages filter to trigger your pipeline when one or more stages of the triggering pipeline complete. This requires Azure DevOps Server 2020 Update 1 or greater.
The stages filter allows you to specify multiple stages, and the triggered pipeline will run when all of the listed stages complete.
This means you can create a pipeline that runs a series of stages, and then triggers another pipeline to run when all those stages are complete.
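A sketch of a stages filter, where the pipeline and stage names are placeholders:

```yaml
resources:
  pipelines:
  - pipeline: upstream
    source: pipeline-A
    trigger:
      stages:               # run only after all listed stages of pipeline A complete
      - Build
      - QA
```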
Plan the Build
Planning the build is a crucial step in creating a pipeline. To create the final artifact, you'll need to install build requirements, restore dependencies, build, test, publish, and create a build artifact.
The build process can be broken down into several steps, all of which belong in a single stage and job: each step depends on the output of the previous one, so there is nothing to gain from running them in parallel.
Here's a list of the steps you'll need to include in your pipeline:
- Install build requirements
- Restore dependencies (in this case, NuGet packages)
- Build
- Test
- Publish (create application packages)
- Create build artifact (to be used in future stages)
These steps will ensure that your pipeline is set up correctly and that your application is properly built and tested before being deployed.
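Here is a sketch of a build stage covering those steps, assuming a .NET Core project; the project paths, test pattern, and artifact name are placeholders:

```yaml
stages:
- stage: Build
  jobs:
  - job: BuildJob
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: UseDotNet@2                    # install build requirements
      inputs:
        packageType: 'sdk'
        version: '3.x'
    - task: DotNetCoreCLI@2                # restore NuGet packages
      inputs:
        command: 'restore'
        projects: '**/*.csproj'
    - task: DotNetCoreCLI@2                # build
      inputs:
        command: 'build'
        projects: '**/*.csproj'
    - task: DotNetCoreCLI@2                # test
      inputs:
        command: 'test'
        projects: '**/*Tests.csproj'       # placeholder test project pattern
    - task: DotNetCoreCLI@2                # publish (create application packages)
      inputs:
        command: 'publish'
        publishWebProjects: true
        arguments: '--output $(Build.ArtifactStagingDirectory)'
    - task: PublishPipelineArtifact@1      # create build artifact for later stages
      inputs:
        targetPath: '$(Build.ArtifactStagingDirectory)'
        artifact: 'drop'
```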
Authentication Parameters
Authentication parameters are a crucial part of configuring the KEDA Azure Pipelines scaler, and Azure DevOps offers several options for the scaler to authenticate with your organization.
You can use a Personal Access Token, or a managed identity via a TriggerAuthentication resource.
For Personal Access Token Authentication, you'll need to provide the organization URL and Personal Access Token (PAT) for Azure DevOps.
The organization URL is the URL of your Azure DevOps organization, which is a required parameter.
A Personal Access Token (PAT) is a secure way to authenticate with Azure DevOps, and it's used in conjunction with the organization URL.
If you're using a Personal Access Token, you can also specify the personalAccessTokenFromEnv or personalAccessTokenFrom parameter.
In short, Personal Access Token authentication needs two pieces of information: the organization URL and the token itself.
Alternatively, you can use Pod Identity Authentication, which is a more secure way to authenticate with Azure DevOps.
Azure AD Workload Identity providers can be used for Pod Identity Authentication.
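Following the KEDA documentation, a sketch of PAT authentication wired up through a TriggerAuthentication might look like this; the secret name, deployment name, and pool name are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pipeline-auth
data:
  personalAccessToken: <base64-encoded PAT>      # placeholder value
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: pipeline-trigger-auth
spec:
  secretTargetRef:
  - parameter: personalAccessToken
    name: pipeline-auth
    key: personalAccessToken
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-pipelines-scaledobject
spec:
  scaleTargetRef:
    name: azdevops-agent                 # placeholder agent Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
  - type: azure-pipelines
    metadata:
      poolName: "Default"
      organizationURLFromEnv: "AZP_URL"  # env var on the agent container holding the org URL
    authenticationRef:
      name: pipeline-trigger-auth
```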
Supporting Demands
You can use demands in your agent scaler to ensure that jobs are run on the right agents. If you don't want to use demands, the scaler will simply scale based on the pool's queue length.
Demands are useful when you have agents with different capabilities in the same pool. For example, in a Kubernetes cluster you might have agents supporting dotnet5, dotnet6, java, or maven.
To use demands, you can specify a comma-separated list of capabilities that your agent supports. For example, maven, java, make. Note that Agent.Version is ignored.
If requireAllDemands is set to true, KEDA will only scale if the job's demands are fulfilled exactly by a trigger. This means a job with demands maven will not match an agent with capabilities maven, java.
Here's a summary of how demands work:
- Specify a comma-separated list of capabilities in the demands parameter.
- KEDA will determine which agents can fulfill the job based on the demands provided.
- If requireAllDemands is set, KEDA will only scale if the job's demands are fulfilled exactly by a trigger.
Note that if more than one scaling definition can fulfill the demands of the job, each of them will spin up an agent.
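Continuing the sketch above, demands go in the scaler's trigger metadata; the pool name and capability list here are illustrative:

```yaml
  triggers:
  - type: azure-pipelines
    metadata:
      poolName: "Default"
      organizationURLFromEnv: "AZP_URL"
      demands: "maven,java,make"        # capabilities this agent deployment supports
      requireAllDemands: "true"         # only scale when a job's demands match exactly
    authenticationRef:
      name: pipeline-trigger-auth
```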
Pipeline Deployment
Pipeline deployment is a crucial step in the Azure DevOps pipeline process. It involves deploying the packaged code produced by the build stage to the target environment.
In Azure DevOps, a deployment job is a special kind of job that allows additional options, including deployment history and deployment strategies. In this walkthrough it's used to deploy the code to the staging environment.
The strategy section in the deployment stage has a variety of lifecycle hooks that can be used in different deployment strategies. For this walkthrough, we're using the simplest strategy of RunOnce.
Here are the key steps in the deployment stage:
- The deployment stage extracts the files from the zip created in the build stage.
- The deployment stage deploys those files to an Azure App Service.
- The deployment stage has a property named environment, which is set to 'Staging' or 'Production' depending on the environment being deployed to.
Stages run sequentially or in parallel depending on the dependencies you set up. In the MercuryWorks pipeline, the staging deployment stage depends on the build stage, so it runs only after the build artifact has been produced.
To deploy to the production environment, the stage and job names, as well as the name of the web app being deployed to, must be updated to indicate they are for production. The dependsOn section must also be updated to indicate a dependency on the build stage and the staging stage.
Here are some key differences between the deployment stage for staging and production:
- The deployment stage for production has a dependency on the build stage and the staging stage.
- The deployment stage for production has a different environment property set to 'Production'.
- The deployment stage for production uses the RunOnce strategy with a deploy lifecycle hook.
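A sketch of what the production deployment stage might look like; the stage, service-connection, and app names are placeholders:

```yaml
- stage: DeployProduction
  dependsOn:
  - Build
  - DeployStaging
  jobs:
  - deployment: DeployProductionWeb        # deployment job: records history, supports strategies
    environment: 'Production'
    pool:
      vmImage: 'windows-latest'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: ExtractFiles@1           # unzip the package created in the build stage
            inputs:
              archiveFilePatterns: '$(Pipeline.Workspace)/drop/*.zip'
              destinationFolder: '$(Pipeline.Workspace)/unzipped'
          - task: AzureWebApp@1            # deploy the extracted files to the App Service
            inputs:
              azureSubscription: 'my-azure-connection'   # placeholder service connection
              appName: 'my-prod-webapp'                  # placeholder production app name
              package: '$(Pipeline.Workspace)/unzipped'
```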
Deployment Options
A pipeline is a collection of stages, and stages can run sequentially or in parallel depending on how you set up dependencies.
You can have multiple jobs in a stage, and by default those jobs run in parallel, which reduces the overall time to complete the stage. In the build stage, we have three different jobs: one to build and create the application artifact, one to build and create the functional test artifact, and one to create the infrastructure artifact.
The deployment stage is specially named and allows for additional options, including deployment history and deployment strategies. This stage is used to deploy code to the staging infrastructure.
The environment property is set to 'Staging' in the deployment stage, and this can be named according to your own environment naming strategy. The strategy section has a variety of lifecycle hooks that can be used in different deployment strategies.
Here are some key deployment options to consider:
- RunOnce: This strategy executes each lifecycle hook once, and then runs an on: success or on: failure hook depending on the result.
- Deploy: This lifecycle hook is used to deploy files to an Azure App Service.
- Extract files from a zip: This step extracts files from a zip that was created in the build stage.
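As a sketch, the runOnce strategy with success and failure hooks looks like this (the steps themselves are placeholders):

```yaml
strategy:
  runOnce:
    deploy:
      steps:
      - script: echo "deploy the application here"
    on:
      success:
        steps:
        - script: echo "deployment succeeded, e.g. notify the team"
      failure:
        steps:
        - script: echo "deployment failed, e.g. roll back or alert"
```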
Sources
- https://learn.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers
- https://mercuryworks.com/blog/creating-a-multi-stage-pipeline-in-azure-devops
- https://codefresh.io/learn/azure-devops/azure-pipelines-the-basics-and-creating-your-first-pipeline/
- https://keda.sh/docs/2.14/scalers/azure-pipelines/
- https://www.javatpoint.com/azure-devops-pipeline