Azure Pipelines Conditions Essentials for Streamlined CI/CD

Azure Pipelines Conditions are a powerful tool that allows you to control the flow of your Continuous Integration and Continuous Deployment (CI/CD) pipeline.

They can be used to skip or fail a build based on various conditions, such as the branch being built or the presence of specific files.

In Azure Pipelines, conditions can be set on stages, jobs, and steps, which means different parts of the same pipeline can run under different conditions.

This flexibility is essential for streamlined CI/CD, as it allows you to adapt to changing project requirements without modifying the entire pipeline.
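
For example, here is a minimal sketch of the idea; the job names and the branch check are illustrative rather than taken from a real project:

jobs:
  - job: Build
    steps:
      - script: echo Building the app

  - job: Deploy
    dependsOn: Build
    # Run only when Build succeeded and the run is for the main branch
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    steps:
      - script: echo Deploying the app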

Conditional Logic

Conditional Logic is a powerful tool in Azure Pipelines that allows you to control the flow of your pipeline based on specific conditions.

You can specify custom conditions for each stage using the optional 'condition' property, which will override the implicit condition that the previous stage must succeed.

Adding a condition to a stage will remove the implicit condition, so it's common to use a condition of 'and(succeeded(),yourCustomCondition)' to add the implicit success condition back.

Conditions based on whether a previous stage failed or succeeded work only in YAML pipelines, where you can use them to decide whether a stage runs at all.

You can also write custom conditions as expressions that control when each stage runs, such as checking the outcome of a previous stage.

Because customizing a stage's condition removes the default completion and success checks, the and(succeeded(), customCondition) pattern is the usual way to confirm that the preceding stage ran successfully, as in the sketch below.
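
As a sketch, assuming a two-stage pipeline (the stage names and the pull-request check are illustrative):

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        steps:
          - script: echo Building

  - stage: Deploy
    dependsOn: Build
    # and() restores the implicit success check alongside the custom condition
    condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
    jobs:
      - job: DeployJob
        steps:
          - script: echo Deploying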

By using conditional logic, you can also run a job only when one of two or more paths changes, such as when either the Templates or the Parameters directory for your ARM templates is modified; a worked example of this pattern appears in the basic pipelines section below.

Triggering Pipelines

You can manually trigger a pipeline stage by adding the trigger: manual property to a stage. This allows you to have a unified pipeline without always running it to completion.

For instance, you might have a pipeline with stages for building, testing, and deploying to a staging environment, but you prefer to deploy to production manually.

Having a manual trigger for production deployment gives you more control over when your application is released to the public.

In YAML, adding the manual trigger is a one-line change on the stage definition.
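
A minimal sketch, with illustrative stage and job names:

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: echo Building and testing

  - stage: Staging
    dependsOn: Build
    jobs:
      - job: DeployStaging
        steps:
          - script: echo Deploying to staging

  - stage: Production
    dependsOn: Staging
    # The stage waits here until someone starts it manually
    trigger: manual
    jobs:
      - job: DeployProduction
        steps:
          - script: echo Deploying to production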

Advanced Conditions

If one Stage fails, the Pipeline is finished and all other Stages will be skipped by default. This is because each Stage has an implicit condition that states the previous Stage must succeed.

You can add your own custom check to determine if a Stage will run or not by using the optional 'condition' property. This will remove the implicit condition that says the previous Stage must succeed.

To add the implicit success condition back, you can use a condition of 'and(succeeded(),yourCustomCondition)'. This will ensure the Stage runs only if the previous Stage succeeds and your custom condition is met.

Once a Stage has a custom condition, the outcome of the preceding Stage no longer gates it: the Stage runs whenever its condition evaluates to true, even after an upstream failure, unless you include the implicit success condition.
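
For example, here is a sketch of a rollback stage that runs only when the deployment stage fails (the stage names and scripts are illustrative):

stages:
  - stage: Deploy
    jobs:
      - job: DeployJob
        steps:
          - script: echo Deploying

  - stage: Rollback
    dependsOn: Deploy
    # failed() is true when a dependency of this stage failed
    condition: failed()
    jobs:
      - job: RollbackJob
        steps:
          - script: echo Rolling back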

Pipeline Structure

Pipeline Structure is built around three main components: Stages, Jobs, and Steps. A Stage is a logical grouping of related Jobs.

To effectively organize your pipeline, consider breaking down complex tasks into smaller, more manageable Jobs within each Stage. This structure helps to streamline your workflow and make it easier to manage.
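
The nesting looks like this; all names here are placeholders:

stages:
  - stage: StageA            # a logical grouping of related Jobs
    jobs:
      - job: JobOne          # a unit of work that runs on a single agent
        steps:
          - script: echo First step     # Steps run in order inside the Job
          - script: echo Second step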

Defining Stages

Defining stages is a crucial part of creating a pipeline structure. A pipeline must contain at least one stage with no dependencies.

You can organize your pipeline into stages, jobs, and steps. Stages run sequentially by default, but you can control the order by adding dependencies.

The syntax for defining multiple stages and their dependencies covers three common layouts: stages that run sequentially, stages that run in parallel, and fan-out and fan-in.
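
A sketch of each layout follows; the stage names are placeholders, and jobs are omitted for brevity:

# Sequential: each stage implicitly depends on the one before it
stages:
  - stage: A
  - stage: B

# Parallel: an empty dependsOn removes the implicit dependency
stages:
  - stage: A
  - stage: B
    dependsOn: []            # B runs alongside A

# Fan-out and fan-in
stages:
  - stage: A
  - stage: B1
    dependsOn: A
  - stage: B2
    dependsOn: A
  - stage: C
    dependsOn:
      - B1
      - B2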

In classic release pipelines, you instead control the order by setting a trigger on each stage: a stage runs when its trigger fires or when it is started manually.

The trigger types work as follows:

- After release: the stage starts as soon as the release starts, in parallel with other stages that also have an After release trigger.
- After stage: the stage starts after all of the stages it depends on complete.

Steps (aka Tasks)

Steps (Aka Tasks) are the building blocks of your pipeline, and they're essentially packaged scripts or procedures that help you run complicated processes with ease.

Tasks are defined by a set of inputs and abstract away the underlying work, so you can run a complicated process just by supplying those inputs.

Tasks cover a wide range of jobs, from Android builds to Docker builds.

There are many built-in Tasks provided by Microsoft, and you can also install custom tasks from the Visual Studio Marketplace.

Some built-in Tasks have shortcut syntaxes; for example, 'publish' is a shortcut for the PublishPipelineArtifact task, and 'download' is a shortcut for DownloadPipelineArtifact.
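
For instance, the 'publish' shortcut:

steps:
  # Shortcut form of the PublishPipelineArtifact task
  - publish: $(Build.ArtifactStagingDirectory)
    artifact: drop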

Inside each Job, Steps run sequentially, and there's no way to change this order.

Each Step has an implicit condition that states "run if we're in a successful state", but you can use the 'condition' property to specify your own custom check.

Setting the 'continueOnError' property to true reports a failed Step as 'succeeded with issues', so the following Steps still see a successful state.
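
A short sketch combining both properties (the script names are placeholders):

steps:
  - script: ./run-tests.sh
    # A failure here is reported as 'succeeded with issues', so later Steps still run
    continueOnError: true

  - script: ./publish-results.sh
    # always() runs this Step regardless of earlier outcomes
    condition: always()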

3 Basic Pipelines

You can create a basic pipeline in Azure DevOps by updating your azure-pipelines.yml file with jobs to read example files and output their contents using PowerShell tasks.

First, add the pipeline trigger and the agent pool to be used; these control when the pipeline runs and where its jobs execute. Add the following to your azure-pipelines.yml file.
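
A minimal sketch, assuming a main branch and a Microsoft-hosted Ubuntu agent:

trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest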

Next, create a job that checks for modified files and publishes variables indicating whether files under each watched path have changed. This job exposes the variables check_modified.FilesChanged and check_modified.ServicesChanged, which subsequent jobs can use in their conditions.
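
A sketch of such a job, assuming the watched directories are named Files and Services and that the checkout fetches enough history to compare against the previous commit:

jobs:
  - job: determine_changes
    displayName: Check for modified files
    steps:
      - powershell: |
          # List the files changed by the triggering commit
          $changed = git diff --name-only HEAD~1 HEAD
          $filesChanged = 'false'
          $servicesChanged = 'false'
          foreach ($path in $changed) {
            if ($path -like 'Files/*')    { $filesChanged = 'true' }
            if ($path -like 'Services/*') { $servicesChanged = 'true' }
          }
          # Publish output variables for downstream jobs to read
          Write-Host "##vso[task.setvariable variable=FilesChanged;isOutput=true]$filesChanged"
          Write-Host "##vso[task.setvariable variable=ServicesChanged;isOutput=true]$servicesChanged"
        name: check_modified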

To check for changes to files in the Files path, you can use a job with a condition that checks the check_modified.FilesChanged variable. If this variable is true, the job will run.
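
Continuing the jobs list above, here is a sketch of that dependent job (its body is illustrative):

  - job: read_files
    displayName: Read example files
    dependsOn: determine_changes
    # Gate on the output variable published by the check_modified step
    condition: eq(dependencies.determine_changes.outputs['check_modified.FilesChanged'], 'true')
    steps:
      - powershell: |
          Get-ChildItem -Path 'Files' -File | ForEach-Object {
            Write-Host "--- $($_.Name) ---"
            Get-Content $_.FullName
          }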

By using this approach, you ensure that only the jobs or tasks affected by the modified files are executed, which helps the pipeline scale and keeps the change-detection logic contained in the determine_changes job.
