How to Create Azure Pipeline with YAML for Continuous Integration


Creating an Azure pipeline with YAML for Continuous Integration is a straightforward process.

To get started, you need to have a basic understanding of Azure DevOps and YAML syntax.

The first step is to create a new pipeline in Azure DevOps, which can be done by clicking on the "New pipeline" button in the Pipelines section of your project.

Choose the "Empty job" template to start with a blank slate.

Next, you need to specify the trigger for your pipeline, which can be a Git branch, a schedule, or a manual trigger.
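For reference, a minimal azure-pipelines.yml showing a branch trigger alongside a scheduled trigger might look like the sketch below (the branch name and cron expression are placeholders, not values from this walkthrough):

  # azure-pipelines.yml (minimal sketch)
  trigger:
  - main                      # run CI on every push to main

  schedules:
  - cron: "0 2 * * *"         # placeholder: nightly at 02:00 UTC
    displayName: Nightly build
    branches:
      include:
      - main

  pool:
    vmImage: 'ubuntu-latest'

  steps:
  - script: echo "CI run triggered"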


Pipeline Setup

To set up your Azure pipeline with YAML, you'll need to understand the pipeline hierarchy, which consists of Stages, Jobs, and Steps. A stage contains multiple jobs, and jobs contain multiple steps.

A pipeline is structured as follows: Stages -> Jobs -> Steps. To create a pipeline, you'll need to use the required YAML syntax, which can be found in the Microsoft documentation for Azure Pipelines. Be sure to keep an eye on the indents and dashes when creating your pipeline.
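As a quick illustration of that hierarchy, a skeleton with one stage, one job, and two steps might look like this (all names are placeholders):

  stages:
  - stage: Build              # a stage contains one or more jobs
    jobs:
    - job: BuildJob           # a job contains one or more steps
      pool:
        vmImage: 'ubuntu-latest'
      steps:                  # steps are scripts or tasks
      - script: echo "Step 1"
      - script: echo "Step 2"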


You can view and configure pipeline settings from the More actions menu on the pipeline details page. The pipeline settings pane allows you to configure settings such as processing of new run requests, YAML file path, and automatically linking work items included in the run.

Pipeline Requirements

To set up a pipeline, you'll need an Azure subscription, which is free to sign up for. You'll also need an Azure DevOps account, another free sign-up.

A Git repository is necessary, and Azure Repos is a good option. You can connect any repository to Azure Pipelines, but we'll be using Azure Repos in this walkthrough. An Azure App Services Plan with two app services is also required, and there's a free tier available.

An IDE is necessary for development, and Visual Studio Code is a good choice, especially since it has extensions for Pipeline syntax highlighting. A base project is also needed, and we'll be using a .NET Core API project throughout this series. You can find our base project here.



To create a pipeline, you'll need to understand the pipeline hierarchy and supported YAML syntax. A pipeline is comprised of stages, jobs, and steps, with stages containing multiple jobs and jobs containing multiple steps. The YAML syntax is structured with required indents and dashes, and there are syntax checker add-ons in Visual Studio Code to help prevent errors.

Here's a breakdown of the pipeline structure:

  • Stages
  • Jobs
  • Steps

Note that stages can run sequentially or in parallel depending on how you set up dependencies; jobs within a stage all run in parallel, while tasks within a job run sequentially.
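To make the sequencing concrete, here is a small sketch (stage names are placeholders) where Test and Package both depend on Build, so they wait for it and then run in parallel with each other:

  pool:
    vmImage: 'ubuntu-latest'

  stages:
  - stage: Build
    jobs:
    - job: Build
      steps:
      - script: echo "build"

  - stage: Test
    dependsOn: Build          # waits for Build to finish
    jobs:
    - job: Test
      steps:
      - script: echo "test"

  - stage: Package
    dependsOn: Build          # also waits for Build; Test and Package run in parallel
    jobs:
    - job: Package
      steps:
      - script: echo "package"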

Plan the Pipeline

To plan a pipeline, we need to break down the process into manageable steps. The goal is to create an artifact that can be deployed.

The first step is to install the build requirements. This is a crucial part of the process, as it sets the foundation for the rest of the pipeline.


The next step is to restore dependencies, which in this case includes NuGet packages. This ensures that all the necessary components are in place for the build process.

The build process itself is the next step, where the application is compiled and ready for testing.

Testing is a critical step, as it verifies that the application works as expected.

The publish step creates application packages, which are the final output of the build process.

Creating a build artifact is the final step, which will be used in future stages of the pipeline. This artifact is the culmination of all the previous steps and is what will be deployed.

Here are the steps in a single stage and job, which is a good approach when the steps are dependent on each other:

  1. Install build requirements
  2. Restore dependencies (in this case, NuGet packages)
  3. Build
  4. Test
  5. Publish (create application packages)
  6. Create build artifact (to be used in future stages)
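Expressed in YAML, those six steps in a single stage and job might look like the following sketch for a .NET Core project (the task versions and dotnet commands are typical choices, not the article's exact file):

  stages:
  - stage: Build
    jobs:
    - job: Build
      pool:
        vmImage: 'ubuntu-latest'
      steps:
      - task: UseDotNet@2                    # 1. install build requirements
        inputs:
          packageType: 'sdk'
          version: '3.x'
      - script: dotnet restore               # 2. restore dependencies (NuGet packages)
      - script: dotnet build --configuration Release --no-restore    # 3. build
      - script: dotnet test --configuration Release --no-build       # 4. test
      - script: dotnet publish --configuration Release --output $(Build.ArtifactStagingDirectory)   # 5. publish application packages
      - task: PublishBuildArtifacts@1        # 6. create build artifact for later stages
        inputs:
          PathtoPublish: '$(Build.ArtifactStagingDirectory)'
          ArtifactName: 'drop'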

Writing and Testing Code

To write and test code for your Azure pipeline, you'll need to create a YAML file that defines the build and test process. This file is called the pipeline YAML.


The pipeline YAML file should contain a trigger that specifies when the pipeline should run, such as when code is pushed to a specific branch. For example, you can set the trigger to the main branch so the pipeline runs on every push to main.

The build process in the pipeline YAML file should include a step to restore NuGet packages, which are essential for your .NET project. This can be done using the "NuGet restore" task.

Start Writing Code

You can name the stages, jobs, and tasks in your pipeline whatever you like, but keep in mind that internal names can't have spaces. displayName is a useful feature that allows you to add a more descriptive name that will be displayed in Azure DevOps.

To run your build, you'll need to specify a virtual machine (agent) using the pool/vmImage option. You have two choices: a Microsoft-hosted agent or a private agent. For now, let's go with Microsoft, as you can get a free agent.


Here are some details to keep in mind when choosing a virtual machine:

  • Microsoft-hosted agent: You can get one for free.
  • Private agent: You'll need to set one up yourself.

The first task in your pipeline should be the .NET Core installer task. This task ensures that a specific version of the .NET Core SDK, one compatible with your application, is installed. You can specify the version you want to install using the syntax '3.x', which will install the latest release of major version 3.
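Putting the agent pool and installer step together might look like this sketch (UseDotNet@2 is the current name of the .NET Core SDK installer task; the image name is one of the standard Microsoft-hosted options):

  pool:
    vmImage: 'windows-latest'          # Microsoft-hosted agent

  steps:
  - task: UseDotNet@2
    displayName: 'Install .NET Core SDK 3.x'
    inputs:
      packageType: 'sdk'
      version: '3.x'                   # latest SDK within major version 3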

Start Testing

To start testing your code, you need to build a new pipeline in Azure DevOps. Select 'Pipelines' in the navigation and then 'New pipeline'.

First, you'll need to set up a connection to your code repository, which in this case is Azure Repos Git. Whichever repository type you connect, the rest of the pipeline setup is the same.

Configure your pipeline by selecting Existing Azure Pipelines YAML file. This will allow you to pick the branch and the path to the YAML file for your pipeline.


You can see the options for pre-made builds, which can be useful starting points. For now, select the branch and the path to the YAML file for your pipeline.

To run the pipeline, click the blue Run button once you've selected the file. You'll see a screen with the build information and a drill down into the currently running job.

If you don't have a passing build, it's time to troubleshoot. Double check that the syntax in YAML is correct, as a single error can prevent the pipeline from running.

Here are the steps to build a new pipeline:

  1. Build a New Pipeline: Select 'Pipelines' in the navigation and then 'New pipeline'.
  2. Where is your code?: Set up a connection to your code repository, which in this case is Azure Repos Git.
  3. Configure your pipeline: Select Existing Azure Pipelines YAML file and pick the branch and the path to the YAML file for your pipeline.

Remember, a passing build is the first step to testing your code successfully.

Add Steps

Adding steps to your pipeline is a breeze. You can add more scripts or tasks as steps, and tasks are pre-packaged scripts that handle common operations like building, testing, publishing, or deploying your app.

For Java, the Maven task we used handles testing and publishing results. You can use a task to publish code coverage results too. To add a task, you'll need to open the YAML editor for your pipeline and add the following snippet to the end of your YAML file.

Here's the snippet you'll need:

  - task: PublishCodeCoverageResults@1
    inputs:
      codeCoverageTool: "JaCoCo"
      summaryFileLocation: "$(System.DefaultWorkingDirectory)/**/site/jacoco/jacoco.xml"
      reportDirectory: "$(System.DefaultWorkingDirectory)/**/site/jacoco"
      failIfCoverageEmpty: true

Save the changes and you'll be able to view your test and code coverage results by selecting your build and going to the Test and Coverage tabs.

Two Answers


In YAML pipelines, there's no separation between builds and releases, so you can use stages to represent different parts of your development cycle, like build or deploy.

You can organize your pipeline by environment, such as Dev, QA, or Production, to make it more manageable.

Using a deployment job is recommended in YAML pipelines, as it's a collection of steps that run sequentially against an Azure DevOps environment.

This approach helps keep your deployment steps separate and easy to maintain.
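A minimal sketch of stages organized by environment, each using a deployment job (environment names and steps are placeholders):

  stages:
  - stage: DeployDev
    jobs:
    - deployment: DeployWeb
      environment: 'Dev'
      strategy:
        runOnce:
          deploy:
            steps:
            - script: echo "Deploying to Dev"

  - stage: DeployQA
    dependsOn: DeployDev
    jobs:
    - deployment: DeployWeb
      environment: 'QA'
      strategy:
        runOnce:
          deploy:
            steps:
            - script: echo "Deploying to QA"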

Pipeline Configuration

A pipeline is defined using a YAML file in your repo, usually named azure-pipelines.yml and located at the root of your repo. This file is where you'll define the pipeline's configuration.

The pipeline runs on a Microsoft-hosted Linux machine and has a single step: running the Maven task. The pipeline also runs whenever your team pushes a change to the main branch of your repo or creates a pull request.
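A minimal azure-pipelines.yml matching that description might look like the sketch below (the Maven task inputs shown are common defaults, not the article's exact file):

  # azure-pipelines.yml
  trigger:
  - main

  pool:
    vmImage: 'ubuntu-latest'

  steps:
  - task: Maven@3
    inputs:
      mavenPomFile: 'pom.xml'
      goals: 'package'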


You can extend from a template and use an include template with variables to create a more complex pipeline configuration. This involves using separate files for variables and stages, which can be imported and reused across the pipeline.

Here are some key components to consider when configuring your pipeline:

  • Variables: These define reusable values, such as the virtual machine image, that can be used across the pipeline.
  • Stages: These define reusable stage configurations with parameters that can be customized.
  • Jobs: These are defined within stages and can be used to run specific tasks.
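As a rough sketch of that layout, a variables file and a pipeline that imports it might look like this (vars.yml and the stage contents are assumed names, not the article's actual files):

  # vars.yml (assumed contents)
  variables:
    AgentImage: 'ubuntu-latest'

  # azure-pipelines.yml
  variables:
  - template: vars.yml

  stages:
  - stage: Build
    jobs:
    - job: Build
      pool:
        vmImage: $(AgentImage)
      steps:
      - script: echo "Building on $(AgentImage)"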

Triggers, Name, and Variables

Triggers, Name, and Variables are essential components of a pipeline configuration in Azure DevOps. They can be added to a YAML file to enhance the functionality of a multi-stage pipeline.

Specifying triggers will reduce how often the pipeline is run, and you can direct Azure DevOps to only run the build on pull requests created for a specific branch, such as the master branch. In the example, the pipeline is configured to run on pull requests created for the master branch and on a merge to the master branch.

You can customize CI triggers by replacing the trigger: step with pr: as shown in the examples. This will cause the pipeline to run for each pull request change. You can specify the full name of the branch or a prefix-matching wildcard.



Adding a name to your build creates a unique name for the build, which can be modified to the format desired for your team. By default, it sets the date and the unique build ID in Azure. However, in the example, the default has been overwritten to format the date differently and add the branch name.

Variables are pipeline-specific properties that can be reused throughout the file. A variable is referenced using $(variableName) syntax. For example, the AgentImage has been converted to a variable and referenced using $(AgentImage).

Here are some examples of trigger configurations:

  trigger:
  - main
  - releases/*

  pr:
  - main
  - releases/*
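The build name and variable reuse described above might look like the sketch below (the exact format string from the article's example isn't shown, so this is an assumed variant):

  name: $(Date:yyyyMMdd)$(Rev:.r)-$(SourceBranchName)   # date, revision, and branch name

  variables:
    AgentImage: 'windows-latest'

  pool:
    vmImage: $(AgentImage)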

Azure Multi-Stage Pipeline

Azure Multi-Stage Pipeline is a powerful tool for deploying code to different environments. A pipeline is comprised of stages, jobs, and steps, and can be set up using YAML syntax.

A stage contains multiple jobs and jobs contain multiple steps. Stages can run sequentially or in parallel depending on how you set dependencies up.


The applications we work on at MercuryWorks all have functional tests and infrastructure as code which need their own package of files to be sent to the Release. In the build stage, we end up having three different jobs: one to build and create the application artifact, one to build and create the functional test artifact, and one to create the infrastructure artifact. They all run in parallel, which reduces the overall time to complete the stage.

A deployment stage has a specially named job that allows for additional options, including deployment history and deployment strategies. The strategy section has various lifecycle hooks that can be used in different deployment strategies.

To set up the deployment pipeline, you need to create a separate stage for deployment, which will have a few new concepts compared to the build stage. The deployment stage should not run before or in parallel with the Build stage because it needs the artifact created.

The key dependency to keep in mind when setting up your Azure Multi-Stage Pipeline is that the deployment stage must declare dependsOn for the Build stage, so it only starts once the build artifact exists.
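A minimal sketch of such a deployment stage, assuming a hypothetical service connection ('my-azure-connection'), App Service name ('my-app-service'), and a build artifact named 'drop':

  - stage: Deploy
    dependsOn: Build
    condition: succeeded()
    jobs:
    - deployment: DeployWebApp
      environment: 'dev'
      pool:
        vmImage: 'windows-latest'
      strategy:
        runOnce:
          deploy:
            steps:
            # artifacts from the current run are typically downloaded
            # automatically into $(Pipeline.Workspace) for deployment jobs
            - task: AzureWebApp@1
              inputs:
                azureSubscription: 'my-azure-connection'   # hypothetical service connection
                appName: 'my-app-service'                  # hypothetical App Service name
                package: '$(Pipeline.Workspace)/drop/**/*.zip'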

Using Multiple Versions


You can build a project using different versions of a language by using a matrix of versions and a variable.

To build on a single platform and multiple versions, you need to add a matrix to your azure-pipelines.yml file before the Maven task and after the vmImage.

A matrix can be added with the following code:

  strategy:
    matrix:
      jdk10:
        jdkVersion: "1.10"
      jdk11:
        jdkVersion: "1.11"
    maxParallel: 2

You should replace the line jdkVersionOption: "1.11" with jdkVersionOption: $(jdkVersion) in your Maven task.

Make sure to change the $(imageName) variable back to the platform of your choice.

If you want to build on multiple platforms and versions, you can replace the entire content in your azure-pipelines.yml file before the publishing task with the following snippet:

You should also update the pool section to use the $(imageName) variable.
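The article's snippet isn't reproduced here, but a combined platform-and-version matrix along those lines might look like this sketch:

  strategy:
    matrix:
      linux_jdk10:
        imageName: "ubuntu-latest"
        jdkVersion: "1.10"
      windows_jdk11:
        imageName: "windows-latest"
        jdkVersion: "1.11"
    maxParallel: 2

  pool:
    vmImage: $(imageName)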

After making these changes, save and confirm them to see your build run two jobs on two different platforms and SDKs.

Create Work Item on Failure


Classic build pipelines have a Create work item on failure setting, but YAML pipelines don't. This is because YAML pipelines can be multi-stage, making a pipeline level setting less appropriate.

To implement Create work item on failure in a YAML pipeline, you can use the Work Items - Create REST API call or the Azure DevOps CLI az boards work-item create command at the desired point in your pipeline.

You can use Runtime parameters to configure whether the pipeline succeeds or fails. Set the value of the succeed parameter when manually running the pipeline.

The second job in the pipeline should have a dependency on the first job and only run if the first job fails. This ensures that the work item is created only when the pipeline fails.

The second job uses the Azure DevOps CLI az boards work-item create command to create a bug. For more information on running Azure DevOps CLI commands from a pipeline, see Run commands in a YAML pipeline.
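Putting those pieces together, a sketch close to the documented approach might look like this (the forced failure and the work item title are illustrative):

  parameters:
  - name: succeed
    displayName: Succeed or fail
    type: boolean
    default: false

  trigger:
  - main

  pool:
    vmImage: 'ubuntu-latest'

  jobs:
  - job: Work
    steps:
    - script: echo "Hello, world!"
      displayName: 'Run a one-line script'
    # Fail the job on purpose unless the succeed parameter was set at queue time
    - script: exit 1
      displayName: 'Fail when succeed is false'
      condition: eq(${{ parameters.succeed }}, false)

  - job: ErrorHandler
    dependsOn: Work
    condition: failed()          # runs only if the Work job fails
    steps:
    - bash: |
        az boards work-item create \
          --title "Build $(Build.BuildNumber) failed" \
          --type bug \
          --org "$(System.TeamFoundationCollectionUri)" \
          --project "$(System.TeamProject)"
      env:
        AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)
      displayName: 'Create work item on failure'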



This approach can be used across multiple stages, not just two jobs. It's a flexible solution that works well with YAML pipelines.

You can also use a marketplace extension like Create Bug on Release failure which has support for YAML multi-stage pipelines. This can simplify the process of creating a work item on failure.

Configuring CI/CD Pipelines in Azure DevOps

Configuring CI/CD Pipelines in Azure DevOps is a crucial step in automating the build, test, and deployment process. You can define a pipeline using a YAML file in your repository, usually named azure-pipelines.yml, which is located at the root of your repository.

To create a pipeline, navigate to the Pipelines page in Azure Pipelines, select the pipeline you created, and choose Edit in the context menu to open the YAML editor for the pipeline. Examine the contents of the YAML file to understand the pipeline's structure and configuration.

A pipeline is comprised of Stages, Jobs, and Steps. Stages contain multiple jobs, and jobs contain multiple steps. The YAML syntax for defining a pipeline is specific, so be sure to keep an eye on the required indents and dashes.


You can configure pipeline settings from the More actions menu on the pipeline details page. This includes managing security, renaming/moving the pipeline, adding a status badge to your repository, deleting the pipeline, and configuring scheduled runs.

To configure pipeline triggers, you can specify triggers to reduce how often the pipeline is run. For example, you can direct Azure DevOps to only run the build on pull requests created for the master branch and on a merge to the master branch.

Pipeline variables are pipeline-specific properties that can be reused throughout the file. Variables are referenced using $(variableName) syntax. For example, you can use a variable to pass in the name of the image you want to use for building and testing your project.

You can build and test your project on multiple platforms using strategy and matrix. For example, you can use a variable to pass in the name of the image you want to use, such as "ubuntu-latest", "macOS-latest", or "windows-latest".

Here's an example of how to configure a pipeline to build and test on multiple platforms:

To configure this pipeline, replace the vmImage property with the following content:


  strategy:
    matrix:
      linux:
        imageName: "ubuntu-latest"
      mac:
        imageName: "macOS-latest"
      windows:
        imageName: "windows-latest"
    maxParallel: 3

  pool:
    vmImage: $(imageName)

Select Save and confirm the changes to see your build run up to three jobs on three different platforms. Each agent can run only one job at a time, so you'll need to configure multiple agents to run multiple jobs in parallel.

Insert a Template

Inserting templates in your pipeline configuration is a game-changer. You can copy content from one YAML file and reuse it in a different YAML file, saving you from having to manually include the same logic in multiple places.

This approach is especially useful when you have common steps that need to be executed in multiple pipelines. For instance, the include-npm-steps.yml file template contains steps that are reused in azure-pipelines.yml.

To use a template, it needs to exist on your filesystem at the start of a pipeline run. You can't reference templates in an artifact, so make sure to have them readily available.

This approach can significantly reduce the amount of duplicated code in your pipeline configurations, making them easier to maintain and update.
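As an illustration, include-npm-steps.yml might hold a handful of npm steps that azure-pipelines.yml then pulls in with a template reference (the templates/ folder and the specific commands are assumptions):

  # templates/include-npm-steps.yml (assumed contents)
  steps:
  - script: npm install
  - script: npm test

  # azure-pipelines.yml
  jobs:
  - job: Linux
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: templates/include-npm-steps.yml
  - job: Windows
    pool:
      vmImage: 'windows-latest'
    steps:
    - template: templates/include-npm-steps.yml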

Job, Stage, and Step with Parameters


Job, Stage, and Step with Parameters are incredibly powerful tools in pipeline configuration. You can define templates with parameters that can be reused across different stages and jobs.

Templates can be used to define a set of reusable parameters, such as name and vmImage, that can be used to create jobs with the same template but with different parameter values.

You can define a template like npm-with-params.yml that defines two parameters: name and vmImage, and creates a job with the name parameter for the job name and the vmImage parameter for the VM image.

Here's an example of how you can use this template in your pipeline:

  • Reference the template three times, each with different parameter values referring to the operating system and VM image names.
  • Each job performs npm install and npm test steps.

The pipeline will run on a different VM image and named according to the specified OS, thanks to the parameters defined in the template.
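For concreteness, a sketch of what npm-with-params.yml and the pipeline that consumes it might contain (the templates/ folder and the exact step list are assumptions):

  # templates/npm-with-params.yml (assumed contents)
  parameters:
  - name: name
    type: string
    default: ''
  - name: vmImage
    type: string
    default: ''

  jobs:
  - job: ${{ parameters.name }}
    pool:
      vmImage: ${{ parameters.vmImage }}
    steps:
    - script: npm install
    - script: npm test

  # azure-pipelines.yml
  jobs:
  - template: templates/npm-with-params.yml
    parameters:
      name: Linux
      vmImage: 'ubuntu-latest'
  - template: templates/npm-with-params.yml
    parameters:
      name: macOS
      vmImage: 'macOS-latest'
  - template: templates/npm-with-params.yml
    parameters:
      name: Windows
      vmImage: 'windows-latest'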


Scalar parameters without a specified type are treated as strings, which means you can use them in conditional statements like eq(true, parameters['myparam']) to check if a parameter is true or false.

You can also use parameters with step or stage templates, like the example in steps-with-params.yml that defines a parameter named runExtendedTests with a default value of false.
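A sketch of what steps-with-params.yml and its usage might look like (the extended-test command is a hypothetical npm script):

  # steps-with-params.yml (assumed contents)
  parameters:
  - name: runExtendedTests
    type: boolean
    default: false

  steps:
  - script: npm test
  - ${{ if eq(parameters.runExtendedTests, true) }}:
    - script: npm run test:extended     # hypothetical npm script for the extended suite

  # azure-pipelines.yml
  steps:
  - template: steps-with-params.yml
    parameters:
      runExtendedTests: true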

By using parameters in your templates, you can create reusable and flexible pipeline configurations that can be easily customized to fit different use cases.

Reference Paths

Reference Paths are crucial in Pipeline Configuration. You can reference paths in two ways: absolute or relative.

To use an absolute path, the template path must start with a /. This is useful when you want to specify a path that's not dependent on the current file's location.

You can reference files within a nested hierarchy using relative paths. For example, if you have a fileA.yml that includes fileB.yml and fileC.yml, you can reference them with paths relative to fileA.yml, as in the sketch below.

Alternatively, you can use absolute paths to reference files, regardless of the current file's location. This is useful when you want to avoid any potential ambiguity.
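A sketch of both styles inside fileA.yml (the directory names are assumptions):

  # fileA.yml
  steps:
  - template: fileB.yml              # relative: resolved against the directory containing fileA.yml
  - template: dir/fileC.yml          # relative: a file in a nested directory (assumed layout)
  - template: /templates/fileB.yml   # absolute: starts with /, resolved from the repository root

Either way, make sure the referenced files exist in the repository at the start of the run.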
