Azure DTUs (Database Transaction Units) are a way to measure the performance of your database in the cloud, and they're a key component of Azure's DTU-based pricing model.
Each DTU blends a set of metrics: CPU, memory, and I/O (reads and writes). This combination determines the level of performance your database will have.
Within the DTU model, Azure offers three service tiers: Basic, Standard, and Premium. The Basic tier is ideal for small databases, while the Premium tier is best suited for large, complex databases.
The number of DTUs you need depends on your database's workload and the level of performance you require.
Workload Planning
Workload planning is crucial for determining the right amount of resources for your Azure SQL Database. Elastic pools, for example, are well suited for databases with a low resource-utilization average and relatively infrequent utilization spikes.
To plan your workload, you first need to determine the number of DTUs required. If you're migrating an existing workload, use SKU recommendations to approximate that number.
When analyzing existing workloads, use query-performance insights to understand your database-resource consumption and gain deeper insights for optimizing your workload. The sys.dm_db_resource_stats dynamic management view (DMV) lets you view resource consumption for the last hour, while the sys.resource_stats catalog view displays resource consumption for the last 14 days at a lower fidelity of five-minute averages.
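For instance, a minimal query against sys.dm_db_resource_stats (run in the user database) returns those recent utilization samples; this is only a sketch that selects a few of the view's documented columns:

```sql
-- Last hour of utilization for the current database, sampled every 15 seconds
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```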
Calculate Workload Requirements
Calculating workload requirements is crucial for a smooth migration to SQL Database.
To determine the number of DTUs needed, you can use SKU recommendations for an existing on-premises or SQL Server virtual machine workload.
You can also use query-performance insights to understand your database-resource consumption and optimize your workload.
The sys.dm_db_resource_stats dynamic management view shows resource consumption for the last hour, giving you a clear picture of your current usage.
The sys.resource_stats catalog view displays resource consumption for the last 14 days, but at a lower fidelity of five-minute averages.
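For the longer 14-day history, a similar sketch can be run against the master database; 'YourDatabase' below is a placeholder name, not a value from this article:

```sql
-- Run in the master database: about 14 days of history at 5-minute granularity
SELECT start_time,
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.resource_stats
WHERE database_name = 'YourDatabase'
ORDER BY start_time DESC;
```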
Determine Utilization
To determine the average percentage of DTU/eDTU utilization relative to the DTU/eDTU limit of a database or an elastic pool, use the following formula: avg_dtu_percent = MAX(avg_cpu_percent, avg_data_io_percent, avg_log_write_percent).
The input values for this formula can be obtained from the sys.dm_db_resource_stats dynamic management view and the sys.resource_stats and sys.elastic_pool_resource_stats catalog views.
In other words, to determine the percentage of DTU/eDTU utilization toward the DTU/eDTU limit of a database or an elastic pool, pick the largest percentage value from the following: avg_cpu_percent, avg_data_io_percent, and avg_log_write_percent at a given point in time.
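One way to apply that rule per sample, sketched here against sys.dm_db_resource_stats (the same pattern works against sys.resource_stats for the 14-day history), is to take the row-wise maximum of the three percentages:

```sql
-- Approximate DTU utilization per sample: the largest of the three component percentages
SELECT end_time,
       (SELECT MAX(v)
        FROM (VALUES (avg_cpu_percent),
                     (avg_data_io_percent),
                     (avg_log_write_percent)) AS t(v)) AS avg_dtu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```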
The DTU limit of a database is determined by CPU, reads, writes, and memory available to the database. However, because the SQL Database engine typically uses all available memory for its data cache to improve performance, the avg_memory_usage_percent value will usually be close to 100 percent, regardless of current database load.
Compare Service Tiers
When choosing a service tier, you'll want to consider your business continuity, storage, and performance requirements.
All three service tiers target both development and production workloads.
For uptime, all service tiers offer a 99.99% SLA.
Backup storage options are the same across tiers: Basic, Standard, and Premium all let you choose locally redundant, zone-redundant, or geo-redundant backup storage.
CPU and IOPS (input/output operations per second) also differ by tier, with Basic offering low CPU and 1-4 IOPS per DTU, while Premium offers medium and high CPU and more than 25 IOPS per DTU.
Comparing the three service tiers at the low end, the Basic, S0, and S1 service objectives provide less than one vCore (CPU) and are best suited for development, testing, and other infrequently accessed workloads.
Resource Configuration
In the DTU model, customers can't choose the hardware configuration used for their databases, but it's possible for a database to be moved to different hardware due to scaling or infrastructure changes.
A database can be moved to different hardware if it's scaled up or down to a different service objective, if the current infrastructure is approaching its capacity limits, or if the hardware it currently runs on is being decommissioned at end of life.
The DTU model guarantees that the throughput and response time of the DTU benchmark workload will remain substantially identical as the database moves to a different hardware type, as long as its service objective stays the same.
However, for workloads other than the DTU benchmark, it's possible to see performance differences if the database moves from one type of hardware to another, due to different hardware configurations and features.
Customers who want to choose their preferred hardware configuration during database creation and scaling can use the vCore model instead, where detailed resource limits for each service objective on each hardware configuration are documented for single databases and elastic pools.
Units
In Azure SQL Database, compute sizes are expressed in terms of Database Transaction Units (DTUs) for single databases and elastic Database Transaction Units (eDTUs) for elastic pools.
A database transaction unit (DTU) represents a blended measure of CPU, memory, reads, and writes. This bundled measure of compute, storage, and I/O resources provides a predictable level of performance for your database.
Doubling the DTUs by increasing the compute size of a database equates to doubling the set of resources available to that database.
A Premium service tier P11 database with 1750 DTUs provides 350 times more DTU compute power than a basic service tier database with 5 DTUs.
To better understand the resources allocated for your database, consider the following:
- DTUs are a useful metric for comparing the relative resources allocated for databases at different compute sizes and service tiers.
- Each compute size comes with a fixed amount of included storage, a fixed backup-retention period, and a fixed price.
- Service tiers in the DTU-based purchasing model give you the flexibility to change compute sizes with minimal downtime.
Workloads That Benefit from Elastic Resources
Workloads with low resource-utilization averages and infrequent spikes are well-suited for elastic pools.
Elastic pools provide a simple, cost-effective solution to manage performance goals for multiple databases with widely varying and unpredictable usage patterns.
Databases in an elastic pool use a single instance of the database engine and share the same pool of resources, measured by elastic database transaction units (eDTUs).
You can add more eDTUs to an existing pool with minimal database downtime, or remove them at any time if no longer needed.
Elastic pools simplify management tasks and provide a predictable budget for the pool.
Individual databases can autoscale within configured boundaries, consuming more eDTUs under heavier loads and fewer eDTUs under lighter loads.
Databases with no load consume no eDTUs, which makes elastic pools a cost-effective solution.
Hardware Configuration
In the DTU-based purchasing model, customers can't choose the hardware configuration used for their databases.
A database usually stays on a specific type of hardware for a long time, often multiple months, but there are exceptions.
Certain events can cause a database to be moved to different hardware, such as scaling up or down to a different service objective.
The DTU model guarantees that the throughput and response time of the DTU benchmark workload will remain substantially identical as the database moves to a different hardware type, as long as its service objective stays the same.
However, for workloads other than the DTU benchmark, performance differences can be seen if the database moves from one type of hardware to another.
The vCore model allows customers to choose their preferred hardware configuration during database creation and scaling.
Detailed resource limits of each service objective in each hardware configuration are documented for single databases and elastic pools in the vCore model.
To see actual resource governance limits for a database or elastic pool, you can query the sys.dm_user_db_resource_governance view.
For a single database, the view returns one row; for a database in an elastic pool, it returns one row for each database in the pool.
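A minimal query against that view might look like the following; only a handful of its documented columns are selected here:

```sql
-- Actual resource governance limits for the current database
-- (one row per database in the pool when run from a pooled database)
SELECT database_name,
       slo_name,
       dtu_limit,
       max_db_max_size_in_mb
FROM sys.dm_user_db_resource_governance;
```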
Storage and Elastic Pool Limits
Storage and elastic pool limits are key considerations when working with Azure DTUs.
The maximum storage size for a single database varies by tier: 2 GB for Basic, 1 TB for Standard, and 4 TB for Premium.
Elastic pool limits are also tier-dependent. For example, the maximum storage size per pool is 156 GB for Basic, 4 TB for Standard and Premium.
Here's a breakdown of the maximum storage and DTU limits for each tier:
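- Basic: up to 5 DTUs per database, 2 GB maximum storage
- Standard (S0–S12): up to 3,000 DTUs per database, 1 TB maximum storage
- Premium (P1–P15): up to 4,000 DTUs per database, 4 TB maximum storage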
Note that in certain regions, the Premium tier has a storage limit of 1 TB, not 4 TB.
Benchmarking and Pricing
The DTU benchmark is a simulated real-world OLTP workload used to calibrate the blend of physical resources (CPU, memory, and I/O) behind each compute size.
To estimate the number of DTUs your database requires, you can use the Azure DTU calculator, which helps you determine the processing capacity your database needs.
Azure provides a variety of instance types, each with a specific number of DTUs and storage capacity. Here's a breakdown of the DTU pricing for Azure SQL using the General Purpose/Standard service tier:
Keep in mind that all instance types come with a minimum of 250 GB of included storage.
Pricing and Cost Models
There are two purchasing models for Azure SQL Database: DTU and vCore. You can migrate from the DTU-based to the vCore-based model with minimal downtime.
Azure SQL Database uses a simple formula to map DTUs to vCores: every 100 Standard-tier DTUs equal roughly 1 General Purpose vCore, and every 125 Premium-tier DTUs equal roughly 1 Business Critical vCore. Note that this rule gives only an approximate mapping and doesn't account for hardware generations or elastic pools.
Here's a breakdown of the DTU pricing for Azure SQL using the General Purpose/Standard service tier in the West US 2 region:
*All options include a minimum of 250 GB of storage. Additional charges apply for extra data storage and long-term backup retention.
DTU- and vCore-Based Models
Azure SQL Database has two pricing models, DTU and vCore, both designed to help you manage your database costs.
In the DTU model, database performance is measured in Database Transaction Units (DTUs).
DTUs are based on a combination of CPU, memory, and I/O resources, so the more of these resources your workload needs, the more DTUs it requires.
DTU pricing is calculated from the number of DTUs you provision: the more DTUs your database is allocated, the higher your costs will be.
vCores, on the other hand, measure database performance based on the number of virtual CPU cores.
Pricing and Cost Models
Azure SQL Database offers two pricing models: DTU and vCore. The DTU model measures compute resources required to run your SQL database, with pricing covered below for the General Purpose/Standard service tier.
You can calculate your DTU requirements using Azure's DTU calculator, selecting from instance types or pool sizes for elastic pool deployment options. Each type offers a certain number of DTUs and a certain amount of storage.
There are 18 "flavors" of SQL database, each with its own pricing. You can view the costs associated with each service tier in the DTU model.
DTU pricing is bundled, making it easy to estimate monthly costs upfront. In the vCore model, however, you pay separately for compute resources, storage, and backup retention.
The vCore model offers more flexibility in managing costs, with separate charges for compute and storage. You can view pricing details for logical cores, memory, and storage used in the vCore model.
Here are the instance types and their corresponding DTU and storage capacities:
Additional data charges apply if you use extra data storage beyond the included storage, with a price of $0.17 per GB-month.
You can migrate from the DTU to vCore based purchase model with minimal downtime, using the following formula to map DTU to vCores:
- Every 100 DTUs Standard tier = 1 vCore of General Purpose
- Every 125 DTUs Premium = 1 vCore of Business Critical
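For example, a Standard-tier database provisioned with 400 DTUs maps to roughly 4 General Purpose vCores (400 / 100), while a Premium-tier database with 500 DTUs maps to roughly 4 Business Critical vCores (500 / 125).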
Compute Options
The vCore model offers two main compute tiers: serverless compute and provisioned compute.
With Serverless Compute, you can dynamically select between 0.5 and 16 vCores, and between 2.02 GB and 48 GB of memory. The price is $0.5218 per vCore hour and $0.115 per GB-month.
You'll also need to consider additional costs for backup storage, which is $0.20 per GB-month for point-in-time restore and $0.05 per GB-month for long-term retention.
With provisioned compute, you select a fixed number of vCores, from 2 to 80, and your capacity and price are determined accordingly.
Here's a breakdown of the vCore pricing for Azure SQL using the General Purpose/Standard service tier, for a Single Database deployment, in the West US 2 region:
You'll also be charged separately for storage, using the Premium tier of the Azure Blob Storage service, at a cost of $0.115 per GB/month, with additional charges for point-in-time recovery and long-term retention.
Sources
- https://learn.microsoft.com/en-us/azure/azure-sql/database/service-tiers-dtu
- https://www.sqlshack.com/dtu-and-vcore-based-models-for-azure-sql-databases/
- https://bluexp.netapp.com/blog/azure-cvo-blg-azure-sql-pricing-simplified
- https://pragmaticworks.com/blog/dtu-vs-vcore-in-azure-sql-database
- https://www.mssqltips.com/sqlservertip/7000/azure-sql-db-purchasing-model-dtu-vcore/