
Azure DevOps stage dependencies allow you to define the order in which stages of your CI/CD pipeline run, ensuring that each stage completes successfully before the next one begins.
This is crucial for maintaining the integrity of your pipeline and preventing errors from propagating downstream. By controlling the flow of your pipeline, you can ensure that each stage has the necessary inputs to run smoothly.
In Azure DevOps, you can define stage dependencies by specifying the stages that must complete before another stage can start. This is done with the dependsOn property on the stage.
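For instance, a minimal pipeline (stage names here are hypothetical) might declare its ordering like this:

```yaml
# Hypothetical stage names; dependsOn controls the execution order.
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building
- stage: Test
  dependsOn: Build        # starts only after Build succeeds
  jobs:
  - job: TestJob
    steps:
    - script: echo Testing
- stage: DeployUS
  dependsOn: Test
  jobs:
  - job: DeployUSJob
    steps:
    - script: echo Deploying US
- stage: DeployEU
  dependsOn: Test         # DeployUS and DeployEU run in parallel
  jobs:
  - job: DeployEUJob
    steps:
    - script: echo Deploying EU
```

Because DeployUS and DeployEU both depend only on Test, they can run in parallel once Test completes.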
Stage dependencies can be used to implement complex pipeline workflows, such as running multiple stages in parallel or in a specific order.
Dependencies Across Stages
You can reference outputs from a job in a previous stage using the stageDependencies context. This requires a specific syntax: stageDependencies.<stage-name>.<job-name>.outputs['<step-name>.<variable-name>']. You can use this to create complex workflows where jobs in one stage depend on the output of jobs in another stage.
The ProjectMarker stage is a great example of how to use this. It sets an output variable, AffectedProjects, that later stages can use to determine whether a project needs to build and deploy.
Here's a breakdown of how to use the stageDependencies context:
- Use the stageDependencies context to reference outputs from a job in a previous stage.
- Reference the output variable with the stageDependencies.<stage-name>.<job-name>.outputs['<step-name>.<variable-name>'] syntax.
- Create an output variable to store the affected projects, as done in the affected-projects-marker.js script.
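The steps above can be sketched as follows (stage, job, and step names follow the text; the value the marker script emits is an assumption):

```yaml
stages:
- stage: ProjectMarker
  jobs:
  - job: Mark
    steps:
    # affected-projects-marker.js is assumed to emit a logging command such as:
    # ##vso[task.setvariable variable=AffectedProjects;isOutput=true]api,client
    - script: node affected-projects-marker.js
      name: Marker              # the step name is part of the reference path
- stage: Build
  dependsOn: ProjectMarker
  jobs:
  - job: BuildJob
    variables:
      affected: $[ stageDependencies.ProjectMarker.Mark.outputs['Marker.AffectedProjects'] ]
    steps:
    - script: echo "Affected projects: $(affected)"
```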
Conditions and Triggers
Conditions and triggers are powerful tools in Azure DevOps, allowing you to specify, with expressions, the conditions under which each stage runs. This means you can customize the behavior of your pipeline stages to run based on specific criteria.
You can force a stage to run even if a previous stage fails, or specify a custom condition. However, when you specify a custom condition for a stage, you replace the default check that the preceding stages completed successfully. To restore it, it's common to write and(succeeded(), <custom_condition>) so the stage also checks whether the preceding stage ran successfully.
A custom condition can be used to run a stage based on the status of a previous stage. For instance, if you want to deploy to a staging environment only when the previous stage has completed successfully, you can use a custom condition to achieve this.
You can also use manual triggers to have a unified pipeline without always running it to completion. This is useful when you want to have a pipeline that runs automatically for some stages, but requires manual triggering for others. For example, you might want all stages to run automatically except for the production deployment, which you prefer to trigger manually when ready.
To use manual triggers, simply add the trigger: manual property to a stage. This will allow you to manually trigger the stage when you're ready.
Stage dependencies can also be used as stage conditions. This is useful for complex pipelines where you want to skip a later stage based on a much earlier stage for which there is no direct dependency. However, a side effect of using a stage condition is that the execution conditions of many subsequent stages may also need to be edited.
To avoid this, you can set a condition at the job or task level. This can be done by creating a local alias and checking the condition on that. This technique works for both Agent-based and Agent-Less (Server) jobs.
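A sketch of that local-alias technique, assuming hypothetical stage, job, and step names:

```yaml
- stage: Deploy
  dependsOn:
  - Build
  - Flags
  jobs:
  - job: DeployJob
    variables:
      # Local alias for the output variable set in the (hypothetical) Flags stage
      runDeploy: $[ stageDependencies.Flags.SetFlags.outputs['setvars.DeployFlag'] ]
    # The condition is evaluated per job, so the stage itself still runs
    # and later stages don't need their conditions edited
    condition: eq(variables.runDeploy, 'true')
    steps:
    - script: echo Deploying
```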
If you need to use a stage dependency variable in a later stage, as a job condition or script variable, but don't wish to add a direct dependency between the stages, you could consider 'republishing' the variable as an output of the intermediate stage(s).
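One hedged sketch of republishing (all names hypothetical): the intermediate stage reads the variable and re-emits it as its own output, so the final stage only needs to depend on the intermediate one.

```yaml
- stage: Middle
  dependsOn: First
  jobs:
  - job: Relay
    variables:
      flag: $[ stageDependencies.First.SetJob.outputs['setvars.Flag'] ]
    steps:
    # Re-emit the value as an output variable of this stage
    - script: echo "##vso[task.setvariable variable=Flag;isOutput=true]$(flag)"
      name: relay
- stage: Last
  dependsOn: Middle         # no direct dependency on First
  jobs:
  - job: UseFlag
    condition: eq(stageDependencies.Middle.Relay.outputs['relay.Flag'], 'true')
    steps:
    - script: echo Running
```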
Here are some examples of how to use conditions and triggers in Azure DevOps:
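A sketch of common patterns, with hypothetical stage names:

```yaml
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building
- stage: DeployStaging
  dependsOn: Build
  # Run only on main, and only if Build succeeded
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploying to staging
- stage: Cleanup
  dependsOn: DeployStaging
  condition: always()       # run even if the previous stage failed
  jobs:
  - job: CleanupJob
    steps:
    - script: echo Cleaning up
```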
Deployment Strategies
Azure DevOps provides several deployment strategies to help you manage the rollout of application updates. A rolling deployment replaces instances of the previous version of an application with instances of the new version on a fixed set of virtual machines in each iteration.
To achieve this, you can use lifecycle hooks that run steps during deployment. These hooks resolve into agent jobs or server jobs, depending on the pool attribute.
In a rolling deployment, you can configure the strategy by specifying the keyword "rolling:" under the strategy: node. This allows you to control the number or percentage of virtual machine targets to deploy to in parallel.
Here are the lifecycle hooks supported in a rolling deployment:
- preDeploy
- deploy
- routeTraffic
- postRouteTraffic
- on: success
- on: failure
You can also use the maxParallel keyword to control the number or percentage of virtual machine targets to deploy to in parallel. This ensures that the app keeps running on the remaining machines, capable of handling requests, while the deployment takes place on the rest.
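A sketch of a rolling deployment to virtual machines (the environment name and the steps are placeholders):

```yaml
jobs:
- deployment: VMDeploy
  environment:
    name: my-vm-env               # hypothetical environment with VM resources
    resourceType: VirtualMachine
  strategy:
    rolling:
      maxParallel: 2              # number (or percentage, e.g. 10%) of targets per iteration
      preDeploy:
        steps:
        - script: echo Initializing
      deploy:
        steps:
        - script: echo Deploying the app
      routeTraffic:
        steps:
        - script: echo Routing traffic to the updated targets
      postRouteTraffic:
        steps:
        - script: echo Running health checks
      on:
        failure:
          steps:
          - script: echo Restoring the previous version
        success:
          steps:
          - script: echo Cleaning up
```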
A canary deployment strategy is an advanced deployment strategy that helps mitigate the risk involved in rolling out new versions of applications. By using this strategy, you can roll out the changes to a small subset of servers first.
The following variables are available in a canary deployment strategy:
- strategy.name: Name of the strategy, for example, canary.
- strategy.action: The action to be performed on the Kubernetes cluster, for example, deploy, promote, or reject.
- strategy.increment: The increment value used in the current iteration.
In a canary deployment strategy, the preDeploy lifecycle hook executes once, then the deploy, routeTraffic, and postRouteTraffic lifecycle hooks iterate for each increment. The deployment then exits with either the success or failure hook.
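A sketch of a canary deployment (the environment name, increments, and steps are placeholders):

```yaml
jobs:
- deployment: CanaryDeploy
  environment: k8s-env            # hypothetical Kubernetes environment
  pool:
    vmImage: ubuntu-latest
  strategy:
    canary:
      increments: [10, 20]        # percentages rolled out per iteration
      preDeploy:
        steps:
        - script: echo Runs once, before the iterations
      deploy:
        steps:
        # strategy.action is deploy during increments, then promote or reject
        - script: echo "action=$(strategy.action) increment=$(strategy.increment)"
      routeTraffic:
        steps:
        - script: echo Routing a portion of traffic
      postRouteTraffic:
        steps:
        - script: echo Monitoring the canary
      on:
        failure:
          steps:
          - script: echo Rejecting the canary
        success:
          steps:
          - script: echo Promoting the canary
```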
Build and Deployment
Building and deploying your application on Azure DevOps can be a complex process, but with the right strategies, you can streamline your workflow. Deployment jobs use the $(Pipeline.Workspace) system variable to locate downloaded artifacts.
To deploy application updates, you need to use lifecycle hooks that can run steps during deployment. These lifecycle hooks resolve into agent jobs or server jobs, depending on the pool attribute. By default, the lifecycle hooks will inherit the pool specified by the deployment job.
To build only the projects that are affected, you can use the AffectedProjects variable inside a condition. It's accessible as stageDependencies.ProjectMarker.Mark.outputs['Marker.AffectedProjects'].
Here's how the parts of that reference map onto the pipeline:
- ProjectMarker is the name of the first stage
- Mark is the name of the job in the first stage
- Marker is the name of the task that executes the affected-projects-marker.js script
To use a custom-defined output variable of another stage, you need to explicitly add that stage to the dependsOn property of the stage that wants to use the variable.
To deploy affected projects, you can create a new Deploy stage and add a dependency on the ProjectMarker stage. Then, add a condition to each job to check whether the job's project appears in the AffectedProjects variable.
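Following the stage, job, and step names from the text (the project name 'api' is a placeholder), the Deploy stage might look like this:

```yaml
- stage: Deploy
  dependsOn: ProjectMarker
  jobs:
  - job: DeployApi
    # Run only when 'api' appears in the AffectedProjects output variable
    condition: contains(stageDependencies.ProjectMarker.Mark.outputs['Marker.AffectedProjects'], 'api')
    steps:
    - script: echo Deploying api
```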
Manual Control
Manual Control allows you to run pipeline stages manually, giving you more flexibility in your workflow.
You can manually trigger a YAML pipeline stage by adding the trigger: manual property to it. This way, you can choose when to run the stage, rather than having it run automatically.
A unified pipeline can be achieved without always running it to completion, which is especially useful when you have stages for building, testing, and deploying to different environments.
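A sketch of such a unified pipeline, with hypothetical stage names:

```yaml
stages:
- stage: Development
  jobs:
  - job: DeployDev
    steps:
    - script: echo Deploying to development
- stage: Production
  dependsOn: Development
  trigger: manual            # runs only when triggered manually from the run view
  jobs:
  - job: DeployProd
    steps:
    - script: echo Deploying to production
```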
With this setup, a development stage runs automatically, while the production stage requires manual triggering, allowing you to control when it runs.
By using manual triggers, you can have more control over your pipeline and only run the stages that are necessary at the time.
Approval and Queuing
Azure DevOps allows you to control the execution of stages in your pipeline using approval and queuing policies. You can manually sequence and control the order of execution by using manual approvals.
Manual approvals can be added at the start or end of each stage in the pipeline, and are commonly used to control deployments to production environments.
If you don't specify a limit for the number of parallel deployments, all approval requests are sent out as soon as the releases are created; if the approvers approve all of the releases, they'll all be deployed in parallel. If you do specify a limit and choose to deploy all in sequence, the predeployment approval for the first release is sent out first, and execution continues in sequence for the remaining releases.
Here are the queuing policy options you can choose from:
- Number of parallel deployments: limits the number of parallel deployments
- Deploy all in sequence: deploys releases one after the other
- Deploy latest and cancel the others: deploys the latest release and cancels the others
Specify Approvals
Specifying approvals can be a game-changer in controlling deployments to production environments. Manual approval checks are supported on environments, allowing you to define checks that must be satisfied before a stage consuming that resource can start.
You can add manual approvals at the start or end of each stage in the pipeline. This gives you flexibility in deciding when a stage should run.
Manual approval checks are a mechanism available to the resource owner to control if and when a stage in a pipeline can consume a resource. For more information, see Approvals.
By adding manual approvals, you're essentially creating a gatekeeper for your pipeline stages. This ensures that only authorized personnel can proceed with a stage.
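The approval check itself is configured on the environment (in the environment's Approvals and checks settings), not in YAML; a stage that consumes the environment then waits for approval before starting. A sketch, with a hypothetical environment name:

```yaml
stages:
- stage: DeployProd
  jobs:
  - deployment: Deploy
    environment: production    # assumes a manual approval check is configured on this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Deploying after approval
```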
Specify Queuing Policies
YAML pipelines don't support queuing policies, which means each run is independent and unaware of other runs. This can lead to multiple pipelines executing the same sequence of stages without waiting for each other.
To manually sequence and control the order of execution, you can use manual approvals. This is recommended until queuing policies are available for YAML pipelines.
Queuing policies give you control over how multiple releases are queued into a stage, which is useful when generating builds faster than they can be deployed or when configuring multiple agents.
You configure a queuing policy by setting the Number of parallel deployments limit and choosing what happens when that limit is reached: Deploy all in sequence, or Deploy latest and cancel the others.
Here are the options in more detail:
- Number of parallel deployments: This option limits how many releases can be deployed in parallel; the remaining options determine what happens once the limit is reached.
- Deploy all in sequence: When the limit is reached, releases are deployed in sequence, with each release's predeployment approval sent out before the deployment begins.
- Deploy latest and cancel the others: When the limit is reached, releases are skipped, and the predeployment approval for the latest release is sent out immediately after the post-deployment approval for the previous release is completed.
If you specify a limit and choose Deploy all in sequence, the predeployment approval for release R1 will be sent out first, followed by the deployment of release R1 to the QA stage. Next, a request for post-deployment approval is sent out for release R1, and it's only after this is completed that execution of release R2 begins.
Project Management
Project management in Azure DevOps is all about planning and tracking your project's progress. It's where you create a work breakdown structure and define tasks, dependencies, and timelines.
You can create a project plan in Azure DevOps using the Agile or Scrum process, which helps you prioritize and track tasks. This is particularly useful for teams working on complex projects.
To create a project plan, you need to define your project's scope, timeline, and budget. You also need to identify the team members responsible for each task and assign them to specific tasks.
Azure DevOps allows you to create and manage multiple projects, each with its own plan and schedule. This is useful for teams working on multiple projects simultaneously.
Dependencies in Azure DevOps are used to link tasks together and define their order of execution. This ensures that tasks are completed in the correct order and prevents errors.
By using dependencies, you can create a logical workflow that shows how tasks are related and how they impact each other. This makes it easier to track progress and identify potential roadblocks.
In Azure DevOps, you can create dependencies between tasks using the "Predecessor" link type. This allows you to specify the work items that must be completed before a task can be started.
Dependencies can be used to create complex workflows with multiple tasks and conditions. This is particularly useful for projects with many interdependent tasks.
By using dependencies effectively, you can streamline your project workflow and improve collaboration among team members. This leads to faster project delivery and higher quality results.
Sources
- https://learn.microsoft.com/en-us/azure/devops/pipelines/process/stages
- https://blogs.blackmarble.co.uk/rfennell/using-azure-devops-stage-dependency-variables-with-conditional-stage-and-job-execution/
- https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions
- https://learn.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs
- https://timdeschryver.dev/blog/how-to-make-your-azure-devops-ci-cd-pipeline-faster