![Man Sitting in Front of Three Computers](https://images.pexels.com/photos/4974915/pexels-photo-4974915.jpeg?auto=compress&cs=tinysrgb&w=1920)
Automation is key to a smooth DevOps process, and Azure SQL Express is no exception. Automating the entire pipeline can save you a significant amount of time and reduce errors.
With Azure SQL Express, you can automate the deployment of databases, which is a crucial step in the development process. This can be achieved through the use of Azure DevOps pipelines.
To automate database deployment, you can use the Azure DevOps pipeline task for SQL deployment (SqlAzureDacpacDeployment). It deploys a .dacpac built from your Git repository to Azure SQL Express.
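As a rough sketch of what that task looks like in a pipeline definition (the service connection, server, database, and file names here are placeholder assumptions, not taken from the article):

```yaml
# Hypothetical step: deploy a .dacpac to Azure SQL using SQL login auth.
# All names and paths are placeholders.
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'DatabaseAutomationBlogPost'    # ARM service connection
    AuthenticationType: 'server'                       # SQL login authentication
    ServerName: 'my-sql-server.database.windows.net'   # placeholder server
    DatabaseName: 'my-database'                        # placeholder database
    SqlUsername: '$(sqlUser)'                          # secret pipeline variables
    SqlPassword: '$(sqlPassword)'
    deployType: 'DacpacTask'
    DeploymentAction: 'Publish'
    DacpacFile: '$(Pipeline.Workspace)/drop/MyDatabase.dacpac'
```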
Prerequisites
To get started with DevOps on Azure using SQL Express, you need to meet some prerequisites.
You'll need an Azure DevOps account, which is a must-have for this process.
Create a Resource Group in Azure either through the portal or the Azure CLI. For this example, I've named mine "rg-databaseautomation".
To connect your Azure DevOps account to your Azure subscription, you'll need to create an Azure Resource Manager Service Connection in Azure DevOps. This connection should use the same Azure subscription where you created the Resource Group.
Here's a quick rundown of the necessary steps:
- An Azure DevOps account
- A Resource Group created in Azure
- An Azure Resource Manager Service Connection in Azure DevOps
Give your Service Connection a meaningful name, like I did with "DatabaseAutomationBlogPost".
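For illustration, the Resource Group from the prerequisites could also be created by a pipeline step that exercises the new service connection (the inline script and location are assumptions, not from the article):

```yaml
# Hypothetical step: create the resource group via the service connection.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'DatabaseAutomationBlogPost'  # the service connection above
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Idempotent: re-running against an existing group is a no-op
      az group create --name rg-databaseautomation --location westeurope
```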
Deployment Process
The deployment process is a crucial part of the DevOps pipeline, and it's essential to break it down into manageable steps.
To start, the job will automatically download the pipeline artifact, which contains the .dacpac built in the previous stage.
The SqlAzureDacpacDeployment@1 task can open the Azure SQL Server firewall to the agent and delete the rule after completion. This means you can effectively have just a single job in your deployment stage.
To establish authentication from Azure DevOps to your Azure SQL Server, use Microsoft Entra authentication. This involves using the service principal associated with the Azure DevOps service connection.
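A hedged sketch of that approach: the SqlAzureDacpacDeployment@1 task can authenticate as the service connection's service principal instead of a SQL login (the names below are placeholders):

```yaml
# Hypothetical step: Entra authentication via the service connection's
# service principal, so no SQL credentials are stored in the pipeline.
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'DatabaseAutomationBlogPost'
    AuthenticationType: 'servicePrincipal'             # Entra auth via the SPN
    ServerName: 'my-sql-server.database.windows.net'   # placeholder
    DatabaseName: 'my-database'
    deployType: 'DacpacTask'
    DeploymentAction: 'Publish'
    DacpacFile: '$(Pipeline.Workspace)/drop/MyDatabase.dacpac'
```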
The deployment stage itself will involve deploying the infrastructure, initial scripts, migration scripts, and the web application. Here's a breakdown of the steps involved:
- Deploy the infrastructure
- Deploy the initial scripts that grant the product team, as well as the application, permission to communicate with the database.
- Deploy the migration scripts we generated earlier in the build phase.
- Deploy the web application
These steps will ensure that your application is properly set up and running on your Azure SQL Server.
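The four steps above might be sketched as a single deployment stage; every stage, task, and resource name here is an illustrative assumption rather than the article's actual pipeline:

```yaml
# Hypothetical deployment stage covering the four steps.
stages:
- stage: Deploy
  jobs:
  - deployment: DeployAll
    environment: 'dev'          # approvals can be attached to this environment
    strategy:
      runOnce:
        deploy:
          steps:
          # 1. Deploy the infrastructure (Bicep)
          - task: AzureCLI@2
            inputs:
              azureSubscription: 'DatabaseAutomationBlogPost'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: >
                az deployment group create
                --resource-group rg-databaseautomation
                --template-file infra/main.bicep
          # 2. and 3. Run the initial permission scripts and migration scripts
          - task: SqlAzureDacpacDeployment@1
            inputs:
              azureSubscription: 'DatabaseAutomationBlogPost'
              AuthenticationType: 'servicePrincipal'
              ServerName: 'my-sql-server.database.windows.net'
              DatabaseName: 'my-database'
              deployType: 'SqlTask'
              SqlFile: '$(Pipeline.Workspace)/drop/scripts/initial.sql'
          # 4. Deploy the web application
          - task: AzureWebApp@1
            inputs:
              azureSubscription: 'DatabaseAutomationBlogPost'
              appName: 'my-web-app'
              package: '$(Pipeline.Workspace)/drop/webapp.zip'
```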
SQL Deployment
SQL Deployment is a crucial part of any DevOps pipeline, and Azure SQL Express is a great tool for managing your databases.
To deploy a .dacpac to Azure SQL Server, you need to download the pipeline artifact, open the Azure SQL Server firewall to the agent, deploy the .dacpac, and then delete the firewall rule. Fortunately, the job downloads the pipeline artifact automatically, and the SqlAzureDacpacDeployment@1 task handles opening (and removing) the firewall rule.
You can use either Entra Authentication or Variable Groups to authenticate with the database, but Entra Authentication is considered more secure. To provision access to the service account, you can designate an Entra Security group as the Admin in the Azure SQL Server configuration.
To adopt DbUp, a tool that automates database changes, script out the database in its current state as a defensive, rerunnable initial script that creates the database. This lets every developer get up and running locally, and it means the shared dev database can potentially be deleted, as it's no longer needed.
Results
After implementing the pipeline, you'll see a significant improvement in your results in Azure DevOps (ADO).
The pipeline has two stages: a build stage and a deployment stage. The build stage produces an artifact of the .dacpac, and the deployment stage takes that .dacpac and deploys it.
A complete YAML definition of this pipeline is available on my GitHub repository.
Overview
The end-state architecture is built using a combination of .NET 8, Entity Framework Core with its built-in migrations, and Bicep.
This architecture is designed to satisfy a specific process that requires two developers to approve a pull request and a manager to approve changes before merging into the main branch.
A pipeline is set up to deploy the infrastructure-as-code, which creates the database and sets permissions upon merge.
Here are the key technologies used in this architecture:
- .NET 8
- Entity Framework Core
- Bicep
- Azure DevOps
- Azure
Infrastructure as Code
Infrastructure as Code is a game-changer for managing your Azure resources, and the Bicep file is a key part of the process.
The Bicep file is where you define your infrastructure. It can look like an arcane mess at first, but it's not too bad once you know which properties of Microsoft.Sql/servers need to be included and which can be left out. With some knowledge and practice, you'll be able to create a Bicep file that effectively configures your Azure resources.
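As a minimal sketch of such a Bicep file (the server and database names, SKU, and the Entra group object id are placeholder assumptions, not the article's actual template):

```bicep
// Hypothetical Microsoft.Sql/servers definition with an Entra group as admin.
resource sqlServer 'Microsoft.Sql/servers@2021-11-01' = {
  name: 'my-sql-server'                  // placeholder name
  location: resourceGroup().location
  properties: {
    administrators: {
      administratorType: 'ActiveDirectory'
      azureADOnlyAuthentication: true
      principalType: 'Group'
      login: 'sql-admins'                // Entra security group display name
      sid: '00000000-0000-0000-0000-000000000000' // group object id (placeholder)
      tenantId: subscription().tenantId
    }
  }
}

resource sqlDb 'Microsoft.Sql/servers/databases@2021-11-01' = {
  parent: sqlServer
  name: 'my-database'                    // placeholder name
  location: resourceGroup().location
  sku: {
    name: 'Basic'                        // placeholder service tier
  }
}
```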
Secrets Management
In Azure Pipelines, secrets management is crucial for protecting sensitive information. You can define and populate secrets in the Azure Pipelines Library.
The Library is where you can store essential variables used by your pipeline. Variables can be marked as secret, so they are protected from accidental disclosure.
You can also integrate with Azure Key Vault to access secrets. This allows you to securely store and manage sensitive data.
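A brief sketch of both options in pipeline YAML (the variable group and Key Vault names are hypothetical):

```yaml
# Option 1: reference a Library variable group, which can itself be
# linked to a Key Vault ('database-secrets' is a placeholder name).
variables:
- group: 'database-secrets'

steps:
# Option 2: pull secrets directly from Key Vault during the run.
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'DatabaseAutomationBlogPost'
    KeyVaultName: 'my-key-vault'     # placeholder vault name
    SecretsFilter: 'sqlPassword'     # fetch only the secrets you need
    RunAsPreJob: false
```

Either way, the secret is then available to later steps as a masked variable such as `$(sqlPassword)`.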
Pipeline and CLI
Azure Pipelines offer a lot of flexibility and control through Multi-Stage Pipelines, which allow you to organize your pipeline into distinct stages and jobs, and associate approval requirements with specific environments.
Each job in a pipeline can execute on a distinct agent pool, opening up possibilities even in a hybrid or tightly locked down setup. This means you can have a lot of control over how your pipeline runs.
The Azure CLI is a popular tool that provides a convenient way to create a logical Azure SQL server and other resources, and its create commands are generally idempotent, meaning they won't create duplicate resources if they already exist.
The Pipeline
A pipeline is a series of stages; each stage contains one or more jobs, and each job comprises a set of steps made up of tasks.
Each job can execute on a distinct agent pool, offering a lot of possibilities even in a hybrid or tightly locked-down setup.
A simple, complete build stage typically compiles and tests the code, then publishes the output as an artifact.
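Such a build stage might be sketched like this (the project path, agent image, and artifact name are illustrative assumptions):

```yaml
# Hypothetical build stage that produces the .dacpac artifact.
stages:
- stage: Build
  jobs:
  - job: BuildAndPublish
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: DotNetCoreCLI@2          # assumes an SDK-style database project
      inputs:
        command: 'build'
        projects: '**/*.sqlproj'     # placeholder project glob
        arguments: '--output $(Build.ArtifactStagingDirectory)'
    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: '$(Build.ArtifactStagingDirectory)'
        artifact: 'drop'
```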
Pipeline artifacts are used to save state and enable later jobs in the same pipeline to operate on the same effective state.
Each job is run on a different agent VM, so the usage of artifacts is an important technique to leverage.
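In pipeline YAML, passing state between jobs looks roughly like the following pair of steps (artifact name and paths are assumptions):

```yaml
# In the producing job (one agent VM): publish the state as an artifact.
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifact: 'drop'

# In a later job (a different agent VM): download it again.
# Deployment jobs do this automatically, but it can also be explicit:
- task: DownloadPipelineArtifact@2
  inputs:
    artifact: 'drop'
    path: '$(Pipeline.Workspace)/drop'
```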
By organizing the pipeline into distinct stages and jobs, Multi-Stage Pipelines provide a great degree of control and flexibility, including the ability to associate approval requirements with specific environments.
The pipeline looks like a series of stages, jobs, and steps when viewed in the Azure DevOps UI.
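The approval association itself is configured on the environment in the Azure DevOps UI; the YAML only references the environment by name, as in this hedged sketch:

```yaml
# Hypothetical production stage gated by environment approvals.
- stage: DeployProd
  jobs:
  - deployment: DeployDatabase
    environment: 'production'  # approvals/checks are configured on this
                               # environment in Azure DevOps, not in YAML
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Runs only after the environment's approvals pass"
```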
A fresh viewpoint: Azure Devops Artifacts Icon
CLI
The Azure Command Line Interface (CLI) is a popular, multi-platform tool that provides a convenient way to create a logical Azure SQL server and an Azure SQL DB.
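Kept inside a pipeline step, that could look like the following (server, database, and variable names are placeholder assumptions):

```yaml
# Hypothetical step: create the logical server and database via the CLI.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'DatabaseAutomationBlogPost'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Both commands are idempotent: safe to re-run on every execution
      az sql server create \
        --name my-sql-server \
        --resource-group rg-databaseautomation \
        --location westeurope \
        --admin-user "$SQL_ADMIN" \
        --admin-password "$SQL_ADMIN_PASSWORD"
      az sql db create \
        --name my-database \
        --server my-sql-server \
        --resource-group rg-databaseautomation \
        --service-objective Basic
  env:
    SQL_ADMIN: $(sqlAdminUser)            # secret pipeline variables
    SQL_ADMIN_PASSWORD: $(sqlAdminPassword)
```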
It's worth noting that this Azure CLI command is idempotent: if the target already exists, nothing is done. This is because it internally reduces down to the underlying Databases – Create Or Update REST API call.
In specific cases, if the target exists but has a different configuration, the target will be automatically updated to match the required configuration. For example, if the target DB exists but has a different service (tier) objective, it will be updated to the desired service objective.
Not all Azure CLI operations are idempotent, so it's essential to test and validate their behavior in your specific scenario. Fortunately, the source code for the Azure CLI is available on GitHub, making it feasible to determine the intended behavior outside of just empirical testing.
Sources
- https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/deploying-dapacs-to-azure-sql-via-azure-devops-pipelines/ba-p/4227385
- https://www.bensampica.com/post/azsqlbicepefcore/
- https://joeblogs.technology/2020/05/techniques-for-automating-sql-database-releases-using-azure-devops/
- https://devblogs.microsoft.com/azure-sql/continuous-delivery-for-azure-sql-db-using-azure-devops-multi-stage-pipelines/
- https://stackoverflow.com/questions/56061052/get-a-sql-server-express-during-a-pipeline-build-azure-devops
Featured Images: pexels.com