Azure Data Factory CI/CD with DevOps Pipelines

Azure Data Factory CI/CD with Azure DevOps Pipelines is a game-changer for data integration and processing. By integrating Azure Data Factory with DevOps, you can automate the build, test, and deployment of your data pipelines.

This integration enables continuous integration and continuous deployment (CI/CD), letting you release changes to your data pipelines faster and with greater confidence, which makes it a powerful tool for data engineers and analysts.

Azure Data Factory supports CI/CD through Azure Pipelines, the Azure DevOps service that provides a scalable and secure way to automate builds, tests, and deployments.

Prerequisites

To get started with Azure Data Factory CI/CD, you'll need an Azure Subscription with the ability to create resource groups and resources with the "Owner" role assignment.

Having some basic knowledge of creating Azure Data Factory pipelines is also beneficial, as it will help you navigate the process more smoothly.

You'll need "Owner" privileges to create a service principal that gives DevOps access to the Data Factories within your Resource groups. Without this role, you won't be able to proceed.
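
If you'd rather script this step, a minimal sketch using the Az PowerShell module might look like the following; the display name, role, and scope values are placeholder assumptions to replace with your own (the setup only requires that the principal can reach the Data Factories in your resource groups):

```powershell
# Hypothetical service principal for DevOps access to your Data Factories.
$sp = New-AzADServicePrincipal -DisplayName "adf-cicd-sp"

# Scope the assignment to the resource group holding a Data Factory.
# The subscription ID and resource group name below are placeholders.
New-AzRoleAssignment -ApplicationId $sp.AppId `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
```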

Azure Data Factory CI/CD requires a solid understanding of Azure and its various tools, so make sure you have a good grasp of these concepts before diving in.

Setting Up

To set up your Azure environment, create three resource groups and three data factories through the Azure Portal. Each resource group and data factory pair corresponds to one of the three environments: Dev, UAT, and Prod.

Resource Group names must be unique within your subscription, while Data Factory names must be unique across all of Azure.

If you're already familiar with the Azure portal, you can skip this step by running a PowerShell script from the GitHub Repository. Just make sure to update the variables within the script accordingly.
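
The script itself isn't reproduced here, but a minimal sketch of what it does, assuming the Az module and the naming scheme used later in this article, might look like this (initials, resource group names, and region are placeholders):

```powershell
# Create one resource group and one Data Factory per environment.
# Initials, names, and region are placeholders -- adjust to your own setup.
$environments = @("dev", "uat", "prod")
$initials = "abc"
$location = "eastus"

foreach ($env in $environments) {
    New-AzResourceGroup -Name "$initials-warehouse-$env-rg" -Location $location
    New-AzDataFactoryV2 -ResourceGroupName "$initials-warehouse-$env-rg" `
        -Name "$initials-warehouse-$env-df" `
        -Location $location
}
```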

At the top of the Azure home page, click "Create a resource" to begin the process.

UAT and Prod Resource Groups

Creating UAT and Prod Resource Groups is a crucial step in setting up your Azure Data Factory CI/CD pipeline. To do this, you'll need to create two new Resource Groups, one for UAT and one for Production.

Follow the same steps you used to create your first Resource Group to create the UAT Resource Group. Once created, you should see it in your Azure portal.

Similarly, create the PROD Resource Group using the same process as before. You'll also see it listed in your Azure portal once it's created.

With all three Resource Groups - Dev, UAT, and Prod - in place, you'll be able to manage your Azure Data Factory pipeline more effectively.

Data Factories

Creating Azure Data Factories is a crucial step in building a robust Azure Data Factory CI/CD pipeline. To start, you'll need to create Data Factories in each respective Resource Group, using a naming scheme that includes your initials.

The suggested naming scheme for Data Factories is "initials-warehouse-dev-df" (with "uat" and "prod" in place of "dev" for the other environments). However, since Azure Data Factory names must be unique across all of Azure, you might need to append one or more random digits to your initials to ensure uniqueness.

Your Project

You've created an organization in DevOps and a project for Azure Data Factory. This project will contain your repository for Azure Data Factory.

The project is named "Azure Data Factory" and is set to private visibility. You've also selected Git for version control and Basic for work item process.

Microsoft has extensive documentation on the available version control options and work item processes.

You should explore the options within the project, particularly the "Repos" and "Pipelines" services visible in the left menu.

Here's a quick rundown of the key settings for your project:

  • Project name: Azure Data Factory
  • Visibility: Private
  • Version control: Git
  • Work item process: Basic

Now that your project is set up, you can start creating pipelines and connecting your Azure Data Factory to your Git repository.

The YAML

The YAML file is a crucial component of Azure Pipelines: it contains the configuration for your pipeline and is where you declare the variables, stages, and tasks that will be executed during the pipeline run.

A Pipeline file starts with a variables block, where you declare variables that will be used throughout the pipeline. In this example, variables are stored in Azure DevOps variable groups, which correspond to different environments like dev and pre-prod.

Four variables live in the variable groups: azure_subscription_id, azure_service_connection_name, resource_group_name, and azure_data_factory_name. Two more variables are declared in the pipeline itself: adf_code_path, the path in the repo where your ADF code is stored, and adf_package_file_path, the path in the repo where the package.json file is present.

Before creating the pipeline, you'll need to create a package.json file, which contains details to obtain the ADFUtilities package. This file is created in a 'build' folder, and the package.json file contains a specific code block that will be used by the NPM package to find the ADFUtilities package.
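
Based on the code block Microsoft documents for this file, a typical package.json along those lines looks like the sketch below; the version pin is an assumption, so use whatever the current package version is:

```json
{
    "scripts": {
        "build": "node node_modules/@microsoft/azure-data-factory-utilities/lib/index"
    },
    "dependencies": {
        "@microsoft/azure-data-factory-utilities": "^1.0.0"
    }
}
```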

Here are the key variables you'll need to declare in your Pipeline file:

  • azure_subscription_id
  • azure_service_connection_name
  • resource_group_name
  • azure_data_factory_name
  • adf_code_path
  • adf_package_file_path
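
Assembled, the top of the pipeline file might look like this minimal sketch; the variable group name "adf-dev" and the two path values are assumptions you'd adjust to your own repo layout:

```yaml
trigger:
  branches:
    include:
      - master

variables:
  # Environment-specific values come from an Azure DevOps variable group;
  # "adf-dev" is a placeholder group containing azure_subscription_id,
  # azure_service_connection_name, resource_group_name, and azure_data_factory_name.
  - group: adf-dev
  # Repo paths declared directly in the pipeline.
  - name: adf_code_path
    value: "adf"
  - name: adf_package_file_path
    value: "build"
```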

ADF Modes

ADF Modes are a crucial aspect of working with Data Factories. ADF has two authoring modes: Live Mode and Git Mode.

In Git Mode, you'll need to select two branches: Collaboration branch and Publish branch. The Collaboration branch is where all feature branches are merged, while the Publish branch is where changes, including auto-generated ARM templates, get published.

The Publish branch is automatically created as 'adf_publish' by default. You'll also find a corresponding 'adf' folder in your repository, which contains all the ADF resources.

Here are the two modes in a nutshell:

  • Live Mode: you author directly against the live Data Factory service, and publishing applies changes immediately, with no source control history.
  • Git Mode: you author in feature branches, merge them into the collaboration branch, and publish from there; publishing generates ARM templates into the publish branch (adf_publish by default).

Adding Custom Parameters in ARM Templates

Adding custom parameters in ARM templates can be a bit tricky, but don't worry, I've got you covered. You can't override a parameter if it's not present in the ARM template parameters file.

To get the parameter in the ARM template parameters file, you need to edit the parameter configuration in the ADF Portal. This involves navigating to the Manage tab and clicking on Edit parameter configuration to load the JSON file.

You'll need to go to the specific section where your parameter is located, such as the Linked Service section. From there, you can add the parameter under the typeProperties section.

Here are the steps in more detail:

  1. Navigate to the ADF Portal and go to the Manage tab.
  2. Under the ARM Template section, click "Edit parameter configuration" to load the JSON file.
  3. Go to the required section, for example "Microsoft.DataFactory/factories/linkedServices".
  4. Under typeProperties, add the parameter you want to appear in the ARM template parameter file.
  5. Click OK, which generates a file called "arm-template-parameters-definition.json" in the repo where the ADF code is present.
  6. Run the pipeline again, and you will see the new parameter in the template parameter file.

This process is a bit tedious, but it's worth it in the end. The new parameter will be included in the ARM template parameters file, making it easier to deploy to new environments.
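
As an illustration, here is roughly what an entry in arm-template-parameters-definition.json looks like for a linked service property; "connectionString" is just an example property, and per Microsoft's convention the "=" value tells the generator to parameterize the field while keeping its current value as the default:

```json
{
    "Microsoft.DataFactory/factories/linkedServices": {
        "*": {
            "properties": {
                "typeProperties": {
                    "connectionString": "="
                }
            }
        }
    }
}
```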

Create a Data Factory

To create a Data Factory, you'll need to follow these steps. First, ensure that Git is enabled when creating the new Data Factory, as covered in the prerequisites section.

Navigate to the newly created DEV Data Factory in the desired resource group, and click Author & Monitor to launch the Data Factory authoring UI.

To create a pipeline, click the pencil icon, then click the plus icon, and finally click Pipeline from the list of options.

Here's a list of the necessary details to enter when creating a Data Factory:

  • GIT account details
  • Repo details
  • Resource group details

Note that Azure Data Factory names must be unique across all of Azure, so you might need to append one or more random digits to your initials to make the name unique.

You can also create a feature branch in ADF Portal by following the development flow for ADF:

1. Create a feature branch from your collaboration branch.

2. Develop and manually test your changes in the feature branch.

3. Create a PR from the feature branch to the collaboration branch in your Git repository.

4. Once the PR is merged, the changes will be deployed in ADF Dev Environment.

To publish your changes, click the Publish button, and ADF will create a new branch called adf_publish inside your repository and publish the changes to ADF directly.

CI/CD

CI/CD is a crucial part of Azure Data Factory (ADF) development, ensuring that changes are deployed smoothly and efficiently. The recommended CI/CD flow for ADF involves creating a pipeline in Azure DevOps that builds and deploys ADF resources.

Each user makes changes in their private branches, and then creates a pull request to merge the changes into the master branch. The Azure DevOps pipeline build is triggered every time a new commit is made to master, validating the resources and generating an ARM template as an artifact if validation succeeds.
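
As a hedged sketch of that build stage, using the variables and package.json described earlier and the commands Microsoft documents for the ADFUtilities NPM package, the validation and export tasks might look like this:

```yaml
steps:
  # Install Node.js, then the ADFUtilities package declared in package.json.
  - task: NodeTool@0
    inputs:
      versionSpec: "18.x"

  - task: Npm@1
    displayName: "Install ADFUtilities"
    inputs:
      command: "install"
      workingDir: "$(Build.Repository.LocalPath)/$(adf_package_file_path)"

  # Validate all ADF resources in the repo against the Dev factory.
  - task: Npm@1
    displayName: "Validate ADF code"
    inputs:
      command: "custom"
      workingDir: "$(Build.Repository.LocalPath)/$(adf_package_file_path)"
      customCommand: "run build validate $(Build.Repository.LocalPath)/$(adf_code_path) /subscriptions/$(azure_subscription_id)/resourceGroups/$(resource_group_name)/providers/Microsoft.DataFactory/factories/$(azure_data_factory_name)"

  # Generate the ARM template into an "ArmTemplate" output folder.
  - task: Npm@1
    displayName: "Export ARM template"
    inputs:
      command: "custom"
      workingDir: "$(Build.Repository.LocalPath)/$(adf_package_file_path)"
      customCommand: "run build export $(Build.Repository.LocalPath)/$(adf_code_path) /subscriptions/$(azure_subscription_id)/resourceGroups/$(resource_group_name)/providers/Microsoft.DataFactory/factories/$(azure_data_factory_name) \"ArmTemplate\""

  # Publish the generated templates as the build artifact.
  - task: PublishPipelineArtifact@1
    inputs:
      targetPath: "$(Build.Repository.LocalPath)/$(adf_package_file_path)/ArmTemplate"
      artifact: "ArmTemplates"
```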

The DevOps Release pipeline is configured to create a new release and deploy the ARM template each time a new build is available. This ensures that changes are automatically deployed to the development environment.
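
In a YAML release stage, the deployment itself can be a single ARM deployment task. The sketch below is one way to wire it up; the artifact path and region are assumptions, while the two template file names follow what ADF's export step actually produces:

```yaml
steps:
  - task: AzureResourceManagerTemplateDeployment@3
    displayName: "Deploy ADF ARM template"
    inputs:
      deploymentScope: "Resource Group"
      azureResourceManagerConnection: "$(azure_service_connection_name)"
      subscriptionId: "$(azure_subscription_id)"
      action: "Create Or Update Resource Group"
      resourceGroupName: "$(resource_group_name)"
      location: "East US"   # placeholder region
      templateLocation: "Linked artifact"
      # Paths assume the "ArmTemplates" artifact was downloaded to the workspace.
      csmFile: "$(Pipeline.Workspace)/ArmTemplates/ARMTemplateForFactory.json"
      csmParametersFile: "$(Pipeline.Workspace)/ArmTemplates/ARMTemplateParametersForFactory.json"
      overrideParameters: "-factoryName $(azure_data_factory_name)"
      deploymentMode: "Incremental"
```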

Here's a step-by-step breakdown of the CI/CD process:

  • Create a pipeline in Azure DevOps that builds and deploys ADF resources.
  • Each user makes changes in their private branches and creates a pull request to merge the changes into the master branch.
  • The Azure DevOps pipeline build is triggered every time a new commit is made to master.
  • The pipeline validates the resources and generates an ARM template as an artifact if validation succeeds.
  • The DevOps Release pipeline is configured to create a new release and deploy the ARM template each time a new build is available.

By following this CI/CD flow, developers can ensure that changes are deployed smoothly and efficiently, reducing the risk of errors and improving overall productivity.
